00:00:00.000 Started by upstream project "autotest-per-patch" build number 132718
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.098 The recommended git tool is: git
00:00:00.098 using credential 00000000-0000-0000-0000-000000000002
00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.122 Fetching changes from the remote Git repository
00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.150 Using shallow fetch with depth 1
00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.150 > git --version # timeout=10
00:00:00.172 > git --version # 'git version 2.39.2'
00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.194 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.194 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.527 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.539 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.550 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.550 > git config core.sparsecheckout # timeout=10
00:00:04.560 > git read-tree -mu HEAD # timeout=10
00:00:04.575 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.597 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.597 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.679 [Pipeline] Start of Pipeline
00:00:04.692 [Pipeline] library
00:00:04.694 Loading library shm_lib@master
00:00:04.694 Library shm_lib@master is cached. Copying from home.
00:00:04.713 [Pipeline] node
00:00:04.722 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.723 [Pipeline] {
00:00:04.733 [Pipeline] catchError
00:00:04.734 [Pipeline] {
00:00:04.747 [Pipeline] wrap
00:00:04.755 [Pipeline] {
00:00:04.761 [Pipeline] stage
00:00:04.763 [Pipeline] { (Prologue)
00:00:04.960 [Pipeline] sh
00:00:05.244 + logger -p user.info -t JENKINS-CI
00:00:05.265 [Pipeline] echo
00:00:05.266 Node: CYP12
00:00:05.272 [Pipeline] sh
00:00:05.571 [Pipeline] setCustomBuildProperty
00:00:05.580 [Pipeline] echo
00:00:05.582 Cleanup processes
00:00:05.586 [Pipeline] sh
00:00:05.869 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.869 3078859 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.884 [Pipeline] sh
00:00:06.170 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.170 ++ grep -v 'sudo pgrep'
00:00:06.170 ++ awk '{print $1}'
00:00:06.170 + sudo kill -9
00:00:06.170 + true
00:00:06.186 [Pipeline] cleanWs
00:00:06.196 [WS-CLEANUP] Deleting project workspace...
00:00:06.197 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.204 [WS-CLEANUP] done
00:00:06.209 [Pipeline] setCustomBuildProperty
00:00:06.226 [Pipeline] sh
00:00:06.510 + sudo git config --global --replace-all safe.directory '*'
00:00:06.593 [Pipeline] httpRequest
00:00:07.312 [Pipeline] echo
00:00:07.314 Sorcerer 10.211.164.101 is alive
00:00:07.321 [Pipeline] retry
00:00:07.323 [Pipeline] {
00:00:07.335 [Pipeline] httpRequest
00:00:07.340 HttpMethod: GET
00:00:07.341 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.341 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.353 Response Code: HTTP/1.1 200 OK
00:00:07.353 Success: Status code 200 is in the accepted range: 200,404
00:00:07.354 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.007 [Pipeline] }
00:00:09.026 [Pipeline] // retry
00:00:09.035 [Pipeline] sh
00:00:09.385 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.401 [Pipeline] httpRequest
00:00:09.817 [Pipeline] echo
00:00:09.819 Sorcerer 10.211.164.101 is alive
00:00:09.829 [Pipeline] retry
00:00:09.831 [Pipeline] {
00:00:09.845 [Pipeline] httpRequest
00:00:09.850 HttpMethod: GET
00:00:09.850 URL: http://10.211.164.101/packages/spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:09.851 Sending request to url: http://10.211.164.101/packages/spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:09.855 Response Code: HTTP/1.1 200 OK
00:00:09.856 Success: Status code 200 is in the accepted range: 200,404
00:00:09.856 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:36.333 [Pipeline] }
00:00:36.350 [Pipeline] // retry
00:00:36.357 [Pipeline] sh
00:00:36.644 + tar --no-same-owner -xf spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:39.976 [Pipeline] sh
00:00:40.261 + git -C spdk log --oneline -n5
00:00:40.261 500d76084 nvmf: added support for add/delete host wrt referral
00:00:40.261 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:00:40.261 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:00:40.261 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:00:40.261 e2dfdf06c accel/mlx5: Register post_poller handler
00:00:40.273 [Pipeline] }
00:00:40.287 [Pipeline] // stage
00:00:40.295 [Pipeline] stage
00:00:40.298 [Pipeline] { (Prepare)
00:00:40.315 [Pipeline] writeFile
00:00:40.331 [Pipeline] sh
00:00:40.618 + logger -p user.info -t JENKINS-CI
00:00:40.633 [Pipeline] sh
00:00:40.920 + logger -p user.info -t JENKINS-CI
00:00:40.934 [Pipeline] sh
00:00:41.224 + cat autorun-spdk.conf
00:00:41.224 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.224 SPDK_TEST_NVMF=1
00:00:41.224 SPDK_TEST_NVME_CLI=1
00:00:41.224 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:41.224 SPDK_TEST_NVMF_NICS=e810
00:00:41.224 SPDK_TEST_VFIOUSER=1
00:00:41.224 SPDK_RUN_UBSAN=1
00:00:41.224 NET_TYPE=phy
00:00:41.234 RUN_NIGHTLY=0
00:00:41.239 [Pipeline] readFile
00:00:41.266 [Pipeline] withEnv
00:00:41.269 [Pipeline] {
00:00:41.282 [Pipeline] sh
00:00:41.571 + set -ex
00:00:41.571 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:41.571 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:41.571 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.571 ++ SPDK_TEST_NVMF=1
00:00:41.571 ++ SPDK_TEST_NVME_CLI=1
00:00:41.571 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:41.571 ++ SPDK_TEST_NVMF_NICS=e810
00:00:41.571 ++ SPDK_TEST_VFIOUSER=1
00:00:41.571 ++ SPDK_RUN_UBSAN=1
00:00:41.571 ++ NET_TYPE=phy
00:00:41.571 ++ RUN_NIGHTLY=0
00:00:41.571 + case $SPDK_TEST_NVMF_NICS in
00:00:41.571 + DRIVERS=ice
00:00:41.571 + [[ tcp == \r\d\m\a ]]
00:00:41.571 + [[ -n ice ]]
00:00:41.571 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:41.571 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:41.571 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:41.571 rmmod: ERROR: Module irdma is not currently loaded
00:00:41.571 rmmod: ERROR: Module i40iw is not currently loaded
00:00:41.571 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:41.571 + true
00:00:41.571 + for D in $DRIVERS
00:00:41.571 + sudo modprobe ice
00:00:41.571 + exit 0
00:00:41.581 [Pipeline] }
00:00:41.597 [Pipeline] // withEnv
00:00:41.604 [Pipeline] }
00:00:41.618 [Pipeline] // stage
00:00:41.629 [Pipeline] catchError
00:00:41.631 [Pipeline] {
00:00:41.645 [Pipeline] timeout
00:00:41.645 Timeout set to expire in 1 hr 0 min
00:00:41.647 [Pipeline] {
00:00:41.661 [Pipeline] stage
00:00:41.663 [Pipeline] { (Tests)
00:00:41.680 [Pipeline] sh
00:00:41.972 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.972 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.972 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.972 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:41.972 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:41.972 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:41.972 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:41.972 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:41.972 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:41.972 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:41.972 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:41.972 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:41.972 + source /etc/os-release
00:00:41.972 ++ NAME='Fedora Linux'
00:00:41.972 ++ VERSION='39 (Cloud Edition)'
00:00:41.972 ++ ID=fedora
00:00:41.973 ++ VERSION_ID=39
00:00:41.973 ++ VERSION_CODENAME=
00:00:41.973 ++ PLATFORM_ID=platform:f39
00:00:41.973 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:41.973 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:41.973 ++ LOGO=fedora-logo-icon
00:00:41.973 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:41.973 ++ HOME_URL=https://fedoraproject.org/
00:00:41.973 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:41.973 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:41.973 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:41.973 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:41.973 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:41.973 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:41.973 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:41.973 ++ SUPPORT_END=2024-11-12
00:00:41.973 ++ VARIANT='Cloud Edition'
00:00:41.973 ++ VARIANT_ID=cloud
00:00:41.973 + uname -a
00:00:41.973 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:41.973 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:45.272 Hugepages
00:00:45.272 node hugesize free / total
00:00:45.272 node0 1048576kB 0 / 0
00:00:45.272 node0 2048kB 0 / 0
00:00:45.272 node1 1048576kB 0 / 0
00:00:45.272 node1 2048kB 0 / 0
00:00:45.272
00:00:45.272 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:45.272 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:45.272 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:45.272 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:45.272 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:45.272 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:45.272 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:45.272 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:45.272 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:45.272 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:45.272 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:45.272 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:45.272 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:45.272 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:45.272 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:45.272 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:45.272 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:45.272 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:00:45.272 + rm -f /tmp/spdk-ld-path
00:00:45.272 + source autorun-spdk.conf
00:00:45.272 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.272 ++ SPDK_TEST_NVMF=1
00:00:45.272 ++ SPDK_TEST_NVME_CLI=1
00:00:45.272 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:45.272 ++ SPDK_TEST_NVMF_NICS=e810
00:00:45.272 ++ SPDK_TEST_VFIOUSER=1
00:00:45.272 ++ SPDK_RUN_UBSAN=1
00:00:45.272 ++ NET_TYPE=phy
00:00:45.272 ++ RUN_NIGHTLY=0
00:00:45.272 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:45.272 + [[ -n '' ]]
00:00:45.272 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:45.272 + for M in /var/spdk/build-*-manifest.txt
00:00:45.272 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:45.272 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:45.272 + for M in /var/spdk/build-*-manifest.txt
00:00:45.272 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:45.272 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:45.272 + for M in /var/spdk/build-*-manifest.txt
00:00:45.272 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:45.272 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:45.272 ++ uname
00:00:45.272 + [[ Linux == \L\i\n\u\x ]]
00:00:45.272 + sudo dmesg -T
00:00:45.272 + sudo dmesg --clear
00:00:45.272 + dmesg_pid=3079959
00:00:45.272 + [[ Fedora Linux == FreeBSD ]]
00:00:45.272 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:45.272 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:45.272 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:45.272 + [[ -x /usr/src/fio-static/fio ]]
00:00:45.272 + export FIO_BIN=/usr/src/fio-static/fio
00:00:45.272 + FIO_BIN=/usr/src/fio-static/fio
00:00:45.272 + sudo dmesg -Tw
00:00:45.272 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:45.272 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:45.272 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:45.272 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:45.272 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:45.272 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:45.272 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:45.272 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:45.272 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:45.272 10:59:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:00:45.272 10:59:51 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:00:45.272 10:59:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:00:45.272 10:59:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:00:45.272 10:59:51 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:45.273 10:59:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:00:45.273 10:59:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:45.273 10:59:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:45.273 10:59:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:45.273 10:59:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:45.273 10:59:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:45.273 10:59:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:45.273 10:59:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:45.273 10:59:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:45.273 10:59:51 -- paths/export.sh@5 -- $ export PATH
00:00:45.273 10:59:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:45.273 10:59:51 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:45.273 10:59:51 -- common/autobuild_common.sh@493 -- $ date +%s
00:00:45.273 10:59:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733479191.XXXXXX
00:00:45.273 10:59:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733479191.7Cldxt
00:00:45.273 10:59:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:00:45.273 10:59:51 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:00:45.273 10:59:51 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:45.273 10:59:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:45.273 10:59:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:45.273 10:59:51 -- common/autobuild_common.sh@509 -- $ get_config_params
00:00:45.273 10:59:51 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:00:45.273 10:59:51 -- common/autotest_common.sh@10 -- $ set +x
00:00:45.273 10:59:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:45.273 10:59:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:00:45.273 10:59:51 -- pm/common@17 -- $ local monitor
00:00:45.273 10:59:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.273 10:59:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.273 10:59:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.273 10:59:51 -- pm/common@21 -- $ date +%s
00:00:45.273 10:59:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.273 10:59:51 -- pm/common@21 -- $ date +%s
00:00:45.273 10:59:51 -- pm/common@25 -- $ sleep 1
00:00:45.273 10:59:51 -- pm/common@21 -- $ date +%s
00:00:45.273 10:59:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479191
00:00:45.273 10:59:51 -- pm/common@21 -- $ date +%s
00:00:45.273 10:59:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479191
00:00:45.273 10:59:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479191
00:00:45.273 10:59:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733479191
00:00:45.273 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479191_collect-cpu-load.pm.log
00:00:45.273 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479191_collect-cpu-temp.pm.log
00:00:45.273 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479191_collect-vmstat.pm.log
00:00:45.273 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733479191_collect-bmc-pm.bmc.pm.log
00:00:46.215 10:59:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:00:46.215 10:59:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:46.215 10:59:52 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:46.215 10:59:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.215 10:59:52 -- spdk/autobuild.sh@16 -- $ date -u
00:00:46.215 Fri Dec 6 09:59:52 AM UTC 2024
00:00:46.215 10:59:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:46.215 v25.01-pre-304-g500d76084
00:00:46.215 10:59:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:46.215 10:59:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:46.215 10:59:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:46.215 10:59:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:46.215 10:59:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:46.215 10:59:52 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.476 ************************************
00:00:46.476 START TEST ubsan
00:00:46.476 ************************************
00:00:46.476 10:59:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:00:46.476 using ubsan
00:00:46.476
00:00:46.476 real 0m0.001s
00:00:46.476 user 0m0.000s
00:00:46.476 sys 0m0.000s
00:00:46.476 10:59:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:46.476 10:59:52 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:46.476 ************************************
00:00:46.476 END TEST ubsan
00:00:46.476 ************************************
00:00:46.476 10:59:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:46.476 10:59:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:46.476 10:59:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:46.476 10:59:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:46.476 10:59:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:46.476 10:59:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:46.476 10:59:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:46.476 10:59:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:46.476 10:59:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:46.476 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:46.476 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:47.048 Using 'verbs' RDMA provider
00:01:02.899 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:15.137 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:15.137 Creating mk/config.mk...done.
00:01:15.137 Creating mk/cc.flags.mk...done.
00:01:15.137 Type 'make' to build.
00:01:15.137 11:00:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:15.137 11:00:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:15.137 11:00:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:15.137 11:00:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.138 ************************************
00:01:15.138 START TEST make
00:01:15.138 ************************************
00:01:15.138 11:00:20 make -- common/autotest_common.sh@1129 -- $ make -j144
00:01:15.138 make[1]: Nothing to be done for 'all'.
00:01:16.521 The Meson build system
00:01:16.521 Version: 1.5.0
00:01:16.521 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:16.521 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:16.521 Build type: native build
00:01:16.521 Project name: libvfio-user
00:01:16.521 Project version: 0.0.1
00:01:16.521 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:16.521 C linker for the host machine: cc ld.bfd 2.40-14
00:01:16.521 Host machine cpu family: x86_64
00:01:16.521 Host machine cpu: x86_64
00:01:16.521 Run-time dependency threads found: YES
00:01:16.521 Library dl found: YES
00:01:16.521 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:16.521 Run-time dependency json-c found: YES 0.17
00:01:16.521 Run-time dependency cmocka found: YES 1.1.7
00:01:16.521 Program pytest-3 found: NO
00:01:16.521 Program flake8 found: NO
00:01:16.521 Program misspell-fixer found: NO
00:01:16.521 Program restructuredtext-lint found: NO
00:01:16.521 Program valgrind found: YES (/usr/bin/valgrind)
00:01:16.521 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:16.522 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:16.522 Compiler for C supports arguments -Wwrite-strings: YES
00:01:16.522 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:16.522 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:16.522 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:16.522 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:16.522 Build targets in project: 8
00:01:16.522 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:16.522 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:16.522
00:01:16.522 libvfio-user 0.0.1
00:01:16.522
00:01:16.522 User defined options
00:01:16.522 buildtype : debug
00:01:16.522 default_library: shared
00:01:16.522 libdir : /usr/local/lib
00:01:16.522
00:01:16.522 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:16.779 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:17.037 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:17.037 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:17.037 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:17.037 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:17.037 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:17.037 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:17.037 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:17.037 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:17.037 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:17.037 [10/37] Compiling C object samples/null.p/null.c.o
00:01:17.037 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:17.037 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:17.037 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:17.037 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:17.037 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:17.037 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:17.037 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:17.037 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:17.037 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:17.037 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:17.037 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:17.037 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:17.037 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:17.037 [24/37] Compiling C object samples/server.p/server.c.o
00:01:17.037 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:17.037 [26/37] Compiling C object samples/client.p/client.c.o
00:01:17.037 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:17.037 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:17.037 [29/37] Linking target samples/client
00:01:17.037 [30/37] Linking target test/unit_tests
00:01:17.037 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:17.295 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:17.295 [33/37] Linking target samples/gpio-pci-idio-16
00:01:17.295 [34/37] Linking target samples/null
00:01:17.295 [35/37] Linking target samples/lspci
00:01:17.295 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:17.295 [37/37] Linking target samples/server
00:01:17.295 INFO: autodetecting backend as ninja
00:01:17.295 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:17.295 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:17.863 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:17.863 ninja: no work to do.
00:01:24.440 The Meson build system
00:01:24.440 Version: 1.5.0
00:01:24.440 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:24.440 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:24.440 Build type: native build
00:01:24.440 Program cat found: YES (/usr/bin/cat)
00:01:24.440 Project name: DPDK
00:01:24.440 Project version: 24.03.0
00:01:24.440 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:24.440 C linker for the host machine: cc ld.bfd 2.40-14
00:01:24.440 Host machine cpu family: x86_64
00:01:24.440 Host machine cpu: x86_64
00:01:24.440 Message: ## Building in Developer Mode ##
00:01:24.440 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:24.440 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:24.440 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:24.440 Program python3 found: YES (/usr/bin/python3)
00:01:24.440 Program cat found: YES (/usr/bin/cat)
00:01:24.440 Compiler for C supports arguments -march=native: YES
00:01:24.440 Checking for size of "void *" : 8
00:01:24.440 Checking for size of "void *" : 8 (cached)
00:01:24.440 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:24.440 Library m found: YES
00:01:24.440 Library numa found: YES
00:01:24.440 Has header "numaif.h" : YES
00:01:24.440 Library fdt found: NO
00:01:24.440 Library execinfo found: NO
00:01:24.440 Has header "execinfo.h" : YES
00:01:24.440 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:24.440 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:24.440 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:24.440 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:24.440 Run-time dependency openssl found: YES 3.1.1
00:01:24.440 Run-time dependency libpcap found: YES 1.10.4
00:01:24.440 Has header "pcap.h" with dependency libpcap: YES
00:01:24.440 Compiler for C supports arguments -Wcast-qual: YES
00:01:24.440 Compiler for C supports arguments -Wdeprecated: YES
00:01:24.440 Compiler for C supports arguments -Wformat: YES
00:01:24.440 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:24.440 Compiler for C supports arguments -Wformat-security: NO
00:01:24.440 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.440 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:24.440 Compiler for C supports arguments -Wnested-externs: YES
00:01:24.440 Compiler for C supports arguments -Wold-style-definition: YES
00:01:24.440 Compiler for C supports arguments -Wpointer-arith: YES
00:01:24.440 Compiler for C supports arguments -Wsign-compare: YES
00:01:24.440 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:24.440 Compiler for C supports arguments -Wundef: YES
00:01:24.440 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.440 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:24.440 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:24.440 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.440 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:24.440 Program objdump found: YES (/usr/bin/objdump)
00:01:24.440 Compiler for C supports arguments -mavx512f: YES
00:01:24.440 Checking if "AVX512 checking" compiles: YES
00:01:24.440 Fetching value of define "__SSE4_2__" : 1
00:01:24.440 Fetching value of define "__AES__" : 1
00:01:24.440 Fetching value of define "__AVX__" : 1
00:01:24.440 Fetching value of define "__AVX2__" : 1
00:01:24.440 Fetching value of define "__AVX512BW__" : 1
00:01:24.440 Fetching value of define "__AVX512CD__" : 1
00:01:24.440 Fetching value of define "__AVX512DQ__" : 1
00:01:24.440 Fetching value of define "__AVX512F__" : 1
00:01:24.440 Fetching value of define "__AVX512VL__" : 1 00:01:24.440 Fetching value of define "__PCLMUL__" : 1 00:01:24.440 Fetching value of define "__RDRND__" : 1 00:01:24.440 Fetching value of define "__RDSEED__" : 1 00:01:24.440 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:24.440 Fetching value of define "__znver1__" : (undefined) 00:01:24.441 Fetching value of define "__znver2__" : (undefined) 00:01:24.441 Fetching value of define "__znver3__" : (undefined) 00:01:24.441 Fetching value of define "__znver4__" : (undefined) 00:01:24.441 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:24.441 Message: lib/log: Defining dependency "log" 00:01:24.441 Message: lib/kvargs: Defining dependency "kvargs" 00:01:24.441 Message: lib/telemetry: Defining dependency "telemetry" 00:01:24.441 Checking for function "getentropy" : NO 00:01:24.441 Message: lib/eal: Defining dependency "eal" 00:01:24.441 Message: lib/ring: Defining dependency "ring" 00:01:24.441 Message: lib/rcu: Defining dependency "rcu" 00:01:24.441 Message: lib/mempool: Defining dependency "mempool" 00:01:24.441 Message: lib/mbuf: Defining dependency "mbuf" 00:01:24.441 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:24.441 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:24.441 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:24.441 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:24.441 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:24.441 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:24.441 Compiler for C supports arguments -mpclmul: YES 00:01:24.441 Compiler for C supports arguments -maes: YES 00:01:24.441 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:24.441 Compiler for C supports arguments -mavx512bw: YES 00:01:24.441 Compiler for C supports arguments -mavx512dq: YES 00:01:24.441 Compiler for C supports arguments -mavx512vl: YES 00:01:24.441 Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:24.441 Compiler for C supports arguments -mavx2: YES 00:01:24.441 Compiler for C supports arguments -mavx: YES 00:01:24.441 Message: lib/net: Defining dependency "net" 00:01:24.441 Message: lib/meter: Defining dependency "meter" 00:01:24.441 Message: lib/ethdev: Defining dependency "ethdev" 00:01:24.441 Message: lib/pci: Defining dependency "pci" 00:01:24.441 Message: lib/cmdline: Defining dependency "cmdline" 00:01:24.441 Message: lib/hash: Defining dependency "hash" 00:01:24.441 Message: lib/timer: Defining dependency "timer" 00:01:24.441 Message: lib/compressdev: Defining dependency "compressdev" 00:01:24.441 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:24.441 Message: lib/dmadev: Defining dependency "dmadev" 00:01:24.441 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:24.441 Message: lib/power: Defining dependency "power" 00:01:24.441 Message: lib/reorder: Defining dependency "reorder" 00:01:24.441 Message: lib/security: Defining dependency "security" 00:01:24.441 Has header "linux/userfaultfd.h" : YES 00:01:24.441 Has header "linux/vduse.h" : YES 00:01:24.441 Message: lib/vhost: Defining dependency "vhost" 00:01:24.441 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:24.441 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:24.441 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:24.441 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:24.441 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:24.441 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:24.441 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:24.441 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:24.441 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:24.441 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:24.441 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:24.441 Configuring doxy-api-html.conf using configuration 00:01:24.441 Configuring doxy-api-man.conf using configuration 00:01:24.441 Program mandb found: YES (/usr/bin/mandb) 00:01:24.441 Program sphinx-build found: NO 00:01:24.441 Configuring rte_build_config.h using configuration 00:01:24.441 Message: 00:01:24.441 ================= 00:01:24.441 Applications Enabled 00:01:24.441 ================= 00:01:24.441 00:01:24.441 apps: 00:01:24.441 00:01:24.441 00:01:24.441 Message: 00:01:24.441 ================= 00:01:24.441 Libraries Enabled 00:01:24.441 ================= 00:01:24.441 00:01:24.441 libs: 00:01:24.441 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:24.441 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:24.441 cryptodev, dmadev, power, reorder, security, vhost, 00:01:24.441 00:01:24.441 Message: 00:01:24.441 =============== 00:01:24.441 Drivers Enabled 00:01:24.441 =============== 00:01:24.441 00:01:24.441 common: 00:01:24.441 00:01:24.441 bus: 00:01:24.441 pci, vdev, 00:01:24.441 mempool: 00:01:24.441 ring, 00:01:24.441 dma: 00:01:24.441 00:01:24.441 net: 00:01:24.441 00:01:24.441 crypto: 00:01:24.441 00:01:24.441 compress: 00:01:24.441 00:01:24.441 vdpa: 00:01:24.441 00:01:24.441 00:01:24.441 Message: 00:01:24.441 ================= 00:01:24.441 Content Skipped 00:01:24.441 ================= 00:01:24.441 00:01:24.441 apps: 00:01:24.441 dumpcap: explicitly disabled via build config 00:01:24.441 graph: explicitly disabled via build config 00:01:24.441 pdump: explicitly disabled via build config 00:01:24.441 proc-info: explicitly disabled via build config 00:01:24.441 test-acl: explicitly disabled via build config 00:01:24.441 test-bbdev: explicitly disabled via build config 00:01:24.441 test-cmdline: explicitly disabled via build config 00:01:24.441 test-compress-perf: explicitly disabled via build config 00:01:24.441 test-crypto-perf: explicitly disabled via build 
config 00:01:24.441 test-dma-perf: explicitly disabled via build config 00:01:24.441 test-eventdev: explicitly disabled via build config 00:01:24.441 test-fib: explicitly disabled via build config 00:01:24.441 test-flow-perf: explicitly disabled via build config 00:01:24.441 test-gpudev: explicitly disabled via build config 00:01:24.441 test-mldev: explicitly disabled via build config 00:01:24.441 test-pipeline: explicitly disabled via build config 00:01:24.441 test-pmd: explicitly disabled via build config 00:01:24.441 test-regex: explicitly disabled via build config 00:01:24.441 test-sad: explicitly disabled via build config 00:01:24.441 test-security-perf: explicitly disabled via build config 00:01:24.441 00:01:24.441 libs: 00:01:24.441 argparse: explicitly disabled via build config 00:01:24.441 metrics: explicitly disabled via build config 00:01:24.441 acl: explicitly disabled via build config 00:01:24.441 bbdev: explicitly disabled via build config 00:01:24.441 bitratestats: explicitly disabled via build config 00:01:24.441 bpf: explicitly disabled via build config 00:01:24.441 cfgfile: explicitly disabled via build config 00:01:24.441 distributor: explicitly disabled via build config 00:01:24.441 efd: explicitly disabled via build config 00:01:24.441 eventdev: explicitly disabled via build config 00:01:24.441 dispatcher: explicitly disabled via build config 00:01:24.441 gpudev: explicitly disabled via build config 00:01:24.441 gro: explicitly disabled via build config 00:01:24.441 gso: explicitly disabled via build config 00:01:24.441 ip_frag: explicitly disabled via build config 00:01:24.441 jobstats: explicitly disabled via build config 00:01:24.441 latencystats: explicitly disabled via build config 00:01:24.441 lpm: explicitly disabled via build config 00:01:24.441 member: explicitly disabled via build config 00:01:24.441 pcapng: explicitly disabled via build config 00:01:24.441 rawdev: explicitly disabled via build config 00:01:24.441 regexdev: explicitly 
disabled via build config 00:01:24.441 mldev: explicitly disabled via build config 00:01:24.442 rib: explicitly disabled via build config 00:01:24.442 sched: explicitly disabled via build config 00:01:24.442 stack: explicitly disabled via build config 00:01:24.442 ipsec: explicitly disabled via build config 00:01:24.442 pdcp: explicitly disabled via build config 00:01:24.442 fib: explicitly disabled via build config 00:01:24.442 port: explicitly disabled via build config 00:01:24.442 pdump: explicitly disabled via build config 00:01:24.442 table: explicitly disabled via build config 00:01:24.442 pipeline: explicitly disabled via build config 00:01:24.442 graph: explicitly disabled via build config 00:01:24.442 node: explicitly disabled via build config 00:01:24.442 00:01:24.442 drivers: 00:01:24.442 common/cpt: not in enabled drivers build config 00:01:24.442 common/dpaax: not in enabled drivers build config 00:01:24.442 common/iavf: not in enabled drivers build config 00:01:24.442 common/idpf: not in enabled drivers build config 00:01:24.442 common/ionic: not in enabled drivers build config 00:01:24.442 common/mvep: not in enabled drivers build config 00:01:24.442 common/octeontx: not in enabled drivers build config 00:01:24.442 bus/auxiliary: not in enabled drivers build config 00:01:24.442 bus/cdx: not in enabled drivers build config 00:01:24.442 bus/dpaa: not in enabled drivers build config 00:01:24.442 bus/fslmc: not in enabled drivers build config 00:01:24.442 bus/ifpga: not in enabled drivers build config 00:01:24.442 bus/platform: not in enabled drivers build config 00:01:24.442 bus/uacce: not in enabled drivers build config 00:01:24.442 bus/vmbus: not in enabled drivers build config 00:01:24.442 common/cnxk: not in enabled drivers build config 00:01:24.442 common/mlx5: not in enabled drivers build config 00:01:24.442 common/nfp: not in enabled drivers build config 00:01:24.442 common/nitrox: not in enabled drivers build config 00:01:24.442 common/qat: not 
in enabled drivers build config 00:01:24.442 common/sfc_efx: not in enabled drivers build config 00:01:24.442 mempool/bucket: not in enabled drivers build config 00:01:24.442 mempool/cnxk: not in enabled drivers build config 00:01:24.442 mempool/dpaa: not in enabled drivers build config 00:01:24.442 mempool/dpaa2: not in enabled drivers build config 00:01:24.442 mempool/octeontx: not in enabled drivers build config 00:01:24.442 mempool/stack: not in enabled drivers build config 00:01:24.442 dma/cnxk: not in enabled drivers build config 00:01:24.442 dma/dpaa: not in enabled drivers build config 00:01:24.442 dma/dpaa2: not in enabled drivers build config 00:01:24.442 dma/hisilicon: not in enabled drivers build config 00:01:24.442 dma/idxd: not in enabled drivers build config 00:01:24.442 dma/ioat: not in enabled drivers build config 00:01:24.442 dma/skeleton: not in enabled drivers build config 00:01:24.442 net/af_packet: not in enabled drivers build config 00:01:24.442 net/af_xdp: not in enabled drivers build config 00:01:24.442 net/ark: not in enabled drivers build config 00:01:24.442 net/atlantic: not in enabled drivers build config 00:01:24.442 net/avp: not in enabled drivers build config 00:01:24.442 net/axgbe: not in enabled drivers build config 00:01:24.442 net/bnx2x: not in enabled drivers build config 00:01:24.442 net/bnxt: not in enabled drivers build config 00:01:24.442 net/bonding: not in enabled drivers build config 00:01:24.442 net/cnxk: not in enabled drivers build config 00:01:24.442 net/cpfl: not in enabled drivers build config 00:01:24.442 net/cxgbe: not in enabled drivers build config 00:01:24.442 net/dpaa: not in enabled drivers build config 00:01:24.442 net/dpaa2: not in enabled drivers build config 00:01:24.442 net/e1000: not in enabled drivers build config 00:01:24.442 net/ena: not in enabled drivers build config 00:01:24.442 net/enetc: not in enabled drivers build config 00:01:24.442 net/enetfec: not in enabled drivers build config 
00:01:24.442 net/enic: not in enabled drivers build config 00:01:24.442 net/failsafe: not in enabled drivers build config 00:01:24.442 net/fm10k: not in enabled drivers build config 00:01:24.442 net/gve: not in enabled drivers build config 00:01:24.442 net/hinic: not in enabled drivers build config 00:01:24.442 net/hns3: not in enabled drivers build config 00:01:24.442 net/i40e: not in enabled drivers build config 00:01:24.442 net/iavf: not in enabled drivers build config 00:01:24.442 net/ice: not in enabled drivers build config 00:01:24.442 net/idpf: not in enabled drivers build config 00:01:24.442 net/igc: not in enabled drivers build config 00:01:24.442 net/ionic: not in enabled drivers build config 00:01:24.442 net/ipn3ke: not in enabled drivers build config 00:01:24.442 net/ixgbe: not in enabled drivers build config 00:01:24.442 net/mana: not in enabled drivers build config 00:01:24.442 net/memif: not in enabled drivers build config 00:01:24.442 net/mlx4: not in enabled drivers build config 00:01:24.442 net/mlx5: not in enabled drivers build config 00:01:24.442 net/mvneta: not in enabled drivers build config 00:01:24.442 net/mvpp2: not in enabled drivers build config 00:01:24.442 net/netvsc: not in enabled drivers build config 00:01:24.442 net/nfb: not in enabled drivers build config 00:01:24.442 net/nfp: not in enabled drivers build config 00:01:24.442 net/ngbe: not in enabled drivers build config 00:01:24.442 net/null: not in enabled drivers build config 00:01:24.442 net/octeontx: not in enabled drivers build config 00:01:24.442 net/octeon_ep: not in enabled drivers build config 00:01:24.442 net/pcap: not in enabled drivers build config 00:01:24.442 net/pfe: not in enabled drivers build config 00:01:24.442 net/qede: not in enabled drivers build config 00:01:24.442 net/ring: not in enabled drivers build config 00:01:24.442 net/sfc: not in enabled drivers build config 00:01:24.442 net/softnic: not in enabled drivers build config 00:01:24.442 net/tap: not in 
enabled drivers build config 00:01:24.442 net/thunderx: not in enabled drivers build config 00:01:24.442 net/txgbe: not in enabled drivers build config 00:01:24.442 net/vdev_netvsc: not in enabled drivers build config 00:01:24.442 net/vhost: not in enabled drivers build config 00:01:24.442 net/virtio: not in enabled drivers build config 00:01:24.442 net/vmxnet3: not in enabled drivers build config 00:01:24.442 raw/*: missing internal dependency, "rawdev" 00:01:24.442 crypto/armv8: not in enabled drivers build config 00:01:24.442 crypto/bcmfs: not in enabled drivers build config 00:01:24.442 crypto/caam_jr: not in enabled drivers build config 00:01:24.442 crypto/ccp: not in enabled drivers build config 00:01:24.442 crypto/cnxk: not in enabled drivers build config 00:01:24.442 crypto/dpaa_sec: not in enabled drivers build config 00:01:24.442 crypto/dpaa2_sec: not in enabled drivers build config 00:01:24.442 crypto/ipsec_mb: not in enabled drivers build config 00:01:24.442 crypto/mlx5: not in enabled drivers build config 00:01:24.442 crypto/mvsam: not in enabled drivers build config 00:01:24.442 crypto/nitrox: not in enabled drivers build config 00:01:24.442 crypto/null: not in enabled drivers build config 00:01:24.442 crypto/octeontx: not in enabled drivers build config 00:01:24.442 crypto/openssl: not in enabled drivers build config 00:01:24.442 crypto/scheduler: not in enabled drivers build config 00:01:24.442 crypto/uadk: not in enabled drivers build config 00:01:24.442 crypto/virtio: not in enabled drivers build config 00:01:24.442 compress/isal: not in enabled drivers build config 00:01:24.442 compress/mlx5: not in enabled drivers build config 00:01:24.442 compress/nitrox: not in enabled drivers build config 00:01:24.442 compress/octeontx: not in enabled drivers build config 00:01:24.442 compress/zlib: not in enabled drivers build config 00:01:24.442 regex/*: missing internal dependency, "regexdev" 00:01:24.443 ml/*: missing internal dependency, "mldev" 
00:01:24.443 vdpa/ifc: not in enabled drivers build config 00:01:24.443 vdpa/mlx5: not in enabled drivers build config 00:01:24.443 vdpa/nfp: not in enabled drivers build config 00:01:24.443 vdpa/sfc: not in enabled drivers build config 00:01:24.443 event/*: missing internal dependency, "eventdev" 00:01:24.443 baseband/*: missing internal dependency, "bbdev" 00:01:24.443 gpu/*: missing internal dependency, "gpudev" 00:01:24.443 00:01:24.443 00:01:24.443 Build targets in project: 84 00:01:24.443 00:01:24.443 DPDK 24.03.0 00:01:24.443 00:01:24.443 User defined options 00:01:24.443 buildtype : debug 00:01:24.443 default_library : shared 00:01:24.443 libdir : lib 00:01:24.443 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:24.443 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:24.443 c_link_args : 00:01:24.443 cpu_instruction_set: native 00:01:24.443 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:24.443 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:24.443 enable_docs : false 00:01:24.443 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:24.443 enable_kmods : false 00:01:24.443 max_lcores : 128 00:01:24.443 tests : false 00:01:24.443 00:01:24.443 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.443 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:24.443 [1/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:24.443 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:24.443 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.443 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:24.443 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:24.443 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.443 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.443 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.443 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:24.443 [10/267] Linking static target lib/librte_kvargs.a 00:01:24.443 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:24.443 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.443 [13/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.443 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.443 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.443 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:24.443 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.443 [18/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:24.443 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.443 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.443 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.443 [22/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.443 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:24.443 [24/267] Linking static target 
lib/librte_log.a 00:01:24.443 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.443 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.443 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.443 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.443 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.443 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.443 [31/267] Linking static target lib/librte_pci.a 00:01:24.443 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.443 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:24.702 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.702 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:24.702 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:24.702 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:24.702 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:24.702 [39/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:24.702 [40/267] Linking static target lib/librte_meter.a 00:01:24.702 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.702 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:24.702 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:24.702 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:24.702 [45/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:24.702 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:24.702 [47/267] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:24.702 [48/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:24.702 [49/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.702 [50/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:24.702 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:24.702 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:24.702 [53/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:24.702 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:24.702 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:24.703 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:24.962 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:24.962 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:24.962 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:24.962 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:24.962 [61/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:24.962 [62/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:24.962 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:24.962 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:24.962 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:24.962 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:24.962 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:24.962 [68/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:24.962 
[69/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:24.962 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.962 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:24.962 [72/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:24.962 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:24.962 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:24.962 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:24.962 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:24.962 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:24.962 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:24.962 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:24.962 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:24.962 [81/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:24.962 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:24.962 [83/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:24.962 [84/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:24.962 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:24.962 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:24.962 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:24.962 [88/267] Linking static target lib/librte_telemetry.a 00:01:24.962 [89/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:24.962 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:24.962 [91/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:24.962 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:24.962 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:24.962 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:24.962 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:24.962 [96/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:24.962 [97/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:24.962 [98/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:24.962 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:24.962 [100/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:24.962 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:24.962 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:24.962 [103/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:24.962 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:24.962 [105/267] Linking static target lib/librte_timer.a 00:01:24.962 [106/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:24.962 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:24.962 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:24.962 [109/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:24.962 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:24.962 [111/267] Linking static target lib/librte_cmdline.a 00:01:24.962 [112/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:24.962 [113/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:24.962 [114/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:24.962 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:24.962 [116/267] Linking static target lib/librte_ring.a 00:01:24.962 [117/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:24.962 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:24.962 [119/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:24.962 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:24.962 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:24.962 [122/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:24.962 [123/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:24.962 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:24.962 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:24.962 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:24.962 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:24.962 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:24.962 [129/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:24.962 [130/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:24.962 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:24.962 [132/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:24.962 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:24.962 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:24.962 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:24.962 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:24.962 
[137/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:24.962 [138/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:24.962 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:24.962 [140/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:24.962 [141/267] Linking static target lib/librte_mempool.a 00:01:24.962 [142/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:24.962 [143/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.962 [144/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:24.962 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:24.962 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:24.962 [147/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:24.962 [148/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:24.962 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:24.962 [150/267] Linking static target lib/librte_net.a 00:01:24.962 [151/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.962 [152/267] Linking static target lib/librte_power.a 00:01:24.962 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:24.962 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:24.962 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:24.962 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:24.962 [157/267] Linking static target lib/librte_eal.a 00:01:24.962 [158/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:24.962 [159/267] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:24.962 [160/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:24.962 [161/267] Linking static target lib/librte_rcu.a 00:01:24.962 [162/267] Linking static target lib/librte_security.a 00:01:24.962 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:24.962 [164/267] Linking target lib/librte_log.so.24.1 00:01:25.222 [165/267] Linking static target lib/librte_dmadev.a 00:01:25.222 [166/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:25.222 [167/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:25.222 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.222 [169/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:25.222 [170/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:25.222 [171/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.222 [172/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.222 [173/267] Linking static target lib/librte_compressdev.a 00:01:25.222 [174/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.222 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.222 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.222 [177/267] Linking static target drivers/librte_bus_vdev.a 00:01:25.222 [178/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:25.222 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.222 [180/267] Linking static target lib/librte_reorder.a 00:01:25.222 [181/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:25.222 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:25.222 [183/267] Linking static target lib/librte_mbuf.a 
00:01:25.222 [184/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:25.222 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:25.222 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:25.222 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:25.222 [188/267] Linking static target lib/librte_hash.a 00:01:25.222 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:25.222 [190/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:25.222 [191/267] Linking target lib/librte_kvargs.so.24.1 00:01:25.222 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:25.222 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.222 [194/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.222 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:25.222 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.482 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.482 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:25.482 [199/267] Linking static target drivers/librte_bus_pci.a 00:01:25.482 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.482 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.482 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.482 [203/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.482 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:25.482 [205/267] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:25.482 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:25.482 [207/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.482 [208/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.482 [209/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:25.482 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.482 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:25.482 [212/267] Linking static target lib/librte_cryptodev.a 00:01:25.482 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.742 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:25.742 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.742 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.742 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:26.001 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.001 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.001 [220/267] Linking static target lib/librte_ethdev.a 00:01:26.001 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.001 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.001 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.261 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.261 [225/267] 
Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.262 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.832 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:26.832 [228/267] Linking static target lib/librte_vhost.a 00:01:27.772 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.244 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.823 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.394 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.654 [233/267] Linking target lib/librte_eal.so.24.1 00:01:36.654 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:36.654 [235/267] Linking target lib/librte_ring.so.24.1 00:01:36.654 [236/267] Linking target lib/librte_meter.so.24.1 00:01:36.654 [237/267] Linking target lib/librte_pci.so.24.1 00:01:36.654 [238/267] Linking target lib/librte_timer.so.24.1 00:01:36.654 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:36.654 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:36.913 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:36.913 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:36.913 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:36.914 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:36.914 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:36.914 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:36.914 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:36.914 [248/267] Linking 
target drivers/librte_bus_pci.so.24.1 00:01:37.173 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:37.173 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:37.173 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:37.173 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:37.173 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:37.173 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:01:37.173 [255/267] Linking target lib/librte_reorder.so.24.1 00:01:37.173 [256/267] Linking target lib/librte_compressdev.so.24.1 00:01:37.173 [257/267] Linking target lib/librte_net.so.24.1 00:01:37.433 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:37.433 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:37.433 [260/267] Linking target lib/librte_security.so.24.1 00:01:37.433 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:37.433 [262/267] Linking target lib/librte_hash.so.24.1 00:01:37.433 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:37.693 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:37.693 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:37.693 [266/267] Linking target lib/librte_power.so.24.1 00:01:37.693 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:37.693 INFO: autodetecting backend as ninja 00:01:37.693 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:40.998 CC lib/ut_mock/mock.o 00:01:40.998 CC lib/ut/ut.o 00:01:40.998 CC lib/log/log.o 00:01:40.998 CC lib/log/log_flags.o 00:01:40.998 CC lib/log/log_deprecated.o 00:01:41.259 LIB libspdk_ut_mock.a 00:01:41.259 LIB libspdk_ut.a 00:01:41.259 LIB 
libspdk_log.a 00:01:41.259 SO libspdk_ut_mock.so.6.0 00:01:41.259 SO libspdk_ut.so.2.0 00:01:41.259 SO libspdk_log.so.7.1 00:01:41.259 SYMLINK libspdk_ut_mock.so 00:01:41.259 SYMLINK libspdk_ut.so 00:01:41.520 SYMLINK libspdk_log.so 00:01:41.781 CC lib/dma/dma.o 00:01:41.781 CC lib/util/base64.o 00:01:41.781 CXX lib/trace_parser/trace.o 00:01:41.781 CC lib/util/bit_array.o 00:01:41.781 CC lib/util/cpuset.o 00:01:41.781 CC lib/util/crc16.o 00:01:41.781 CC lib/util/crc32.o 00:01:41.781 CC lib/util/crc32c.o 00:01:41.781 CC lib/util/crc32_ieee.o 00:01:41.781 CC lib/ioat/ioat.o 00:01:41.781 CC lib/util/crc64.o 00:01:41.781 CC lib/util/dif.o 00:01:41.781 CC lib/util/fd.o 00:01:41.781 CC lib/util/fd_group.o 00:01:41.781 CC lib/util/file.o 00:01:41.781 CC lib/util/hexlify.o 00:01:41.781 CC lib/util/iov.o 00:01:41.781 CC lib/util/pipe.o 00:01:41.781 CC lib/util/math.o 00:01:41.781 CC lib/util/net.o 00:01:41.781 CC lib/util/strerror_tls.o 00:01:41.781 CC lib/util/string.o 00:01:41.781 CC lib/util/uuid.o 00:01:41.781 CC lib/util/xor.o 00:01:41.781 CC lib/util/zipf.o 00:01:41.781 CC lib/util/md5.o 00:01:42.043 CC lib/vfio_user/host/vfio_user_pci.o 00:01:42.043 CC lib/vfio_user/host/vfio_user.o 00:01:42.043 LIB libspdk_dma.a 00:01:42.043 SO libspdk_dma.so.5.0 00:01:42.043 LIB libspdk_ioat.a 00:01:42.043 SO libspdk_ioat.so.7.0 00:01:42.043 SYMLINK libspdk_dma.so 00:01:42.043 SYMLINK libspdk_ioat.so 00:01:42.305 LIB libspdk_vfio_user.a 00:01:42.305 SO libspdk_vfio_user.so.5.0 00:01:42.305 LIB libspdk_util.a 00:01:42.305 SYMLINK libspdk_vfio_user.so 00:01:42.305 SO libspdk_util.so.10.1 00:01:42.566 SYMLINK libspdk_util.so 00:01:42.566 LIB libspdk_trace_parser.a 00:01:42.566 SO libspdk_trace_parser.so.6.0 00:01:42.828 SYMLINK libspdk_trace_parser.so 00:01:42.828 CC lib/json/json_parse.o 00:01:42.828 CC lib/json/json_util.o 00:01:42.828 CC lib/json/json_write.o 00:01:42.828 CC lib/vmd/vmd.o 00:01:42.828 CC lib/vmd/led.o 00:01:42.828 CC lib/conf/conf.o 00:01:42.828 CC 
lib/rdma_utils/rdma_utils.o 00:01:42.828 CC lib/env_dpdk/env.o 00:01:42.828 CC lib/idxd/idxd.o 00:01:42.828 CC lib/idxd/idxd_user.o 00:01:42.828 CC lib/env_dpdk/memory.o 00:01:42.828 CC lib/env_dpdk/pci.o 00:01:42.828 CC lib/idxd/idxd_kernel.o 00:01:42.828 CC lib/env_dpdk/init.o 00:01:42.828 CC lib/env_dpdk/threads.o 00:01:42.828 CC lib/env_dpdk/pci_ioat.o 00:01:42.828 CC lib/env_dpdk/pci_virtio.o 00:01:42.828 CC lib/env_dpdk/pci_vmd.o 00:01:42.828 CC lib/env_dpdk/pci_idxd.o 00:01:42.828 CC lib/env_dpdk/pci_event.o 00:01:42.828 CC lib/env_dpdk/sigbus_handler.o 00:01:42.828 CC lib/env_dpdk/pci_dpdk.o 00:01:42.828 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:42.828 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:43.089 LIB libspdk_conf.a 00:01:43.089 LIB libspdk_json.a 00:01:43.089 SO libspdk_conf.so.6.0 00:01:43.089 LIB libspdk_rdma_utils.a 00:01:43.089 SO libspdk_json.so.6.0 00:01:43.089 SO libspdk_rdma_utils.so.1.0 00:01:43.355 SYMLINK libspdk_conf.so 00:01:43.355 SYMLINK libspdk_json.so 00:01:43.355 SYMLINK libspdk_rdma_utils.so 00:01:43.355 LIB libspdk_idxd.a 00:01:43.355 LIB libspdk_vmd.a 00:01:43.355 SO libspdk_idxd.so.12.1 00:01:43.355 SO libspdk_vmd.so.6.0 00:01:43.621 SYMLINK libspdk_idxd.so 00:01:43.621 SYMLINK libspdk_vmd.so 00:01:43.621 CC lib/jsonrpc/jsonrpc_server.o 00:01:43.621 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:43.621 CC lib/jsonrpc/jsonrpc_client.o 00:01:43.621 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:43.621 CC lib/rdma_provider/common.o 00:01:43.621 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:43.882 LIB libspdk_rdma_provider.a 00:01:43.882 LIB libspdk_jsonrpc.a 00:01:43.882 SO libspdk_rdma_provider.so.7.0 00:01:43.882 SO libspdk_jsonrpc.so.6.0 00:01:43.882 SYMLINK libspdk_rdma_provider.so 00:01:43.882 SYMLINK libspdk_jsonrpc.so 00:01:44.143 LIB libspdk_env_dpdk.a 00:01:44.143 SO libspdk_env_dpdk.so.15.1 00:01:44.404 SYMLINK libspdk_env_dpdk.so 00:01:44.404 CC lib/rpc/rpc.o 00:01:44.665 LIB libspdk_rpc.a 00:01:44.665 SO libspdk_rpc.so.6.0 00:01:44.665 
SYMLINK libspdk_rpc.so 00:01:44.927 CC lib/notify/notify.o 00:01:44.927 CC lib/trace/trace.o 00:01:44.927 CC lib/notify/notify_rpc.o 00:01:44.927 CC lib/keyring/keyring.o 00:01:44.927 CC lib/trace/trace_flags.o 00:01:44.927 CC lib/keyring/keyring_rpc.o 00:01:44.927 CC lib/trace/trace_rpc.o 00:01:45.189 LIB libspdk_notify.a 00:01:45.189 SO libspdk_notify.so.6.0 00:01:45.189 LIB libspdk_trace.a 00:01:45.189 LIB libspdk_keyring.a 00:01:45.450 SYMLINK libspdk_notify.so 00:01:45.450 SO libspdk_keyring.so.2.0 00:01:45.450 SO libspdk_trace.so.11.0 00:01:45.450 SYMLINK libspdk_keyring.so 00:01:45.450 SYMLINK libspdk_trace.so 00:01:45.712 CC lib/sock/sock.o 00:01:45.712 CC lib/sock/sock_rpc.o 00:01:45.712 CC lib/thread/thread.o 00:01:45.712 CC lib/thread/iobuf.o 00:01:46.285 LIB libspdk_sock.a 00:01:46.285 SO libspdk_sock.so.10.0 00:01:46.285 SYMLINK libspdk_sock.so 00:01:46.546 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:46.546 CC lib/nvme/nvme_ctrlr.o 00:01:46.546 CC lib/nvme/nvme_fabric.o 00:01:46.546 CC lib/nvme/nvme_ns_cmd.o 00:01:46.546 CC lib/nvme/nvme_ns.o 00:01:46.546 CC lib/nvme/nvme_pcie_common.o 00:01:46.546 CC lib/nvme/nvme_pcie.o 00:01:46.546 CC lib/nvme/nvme_qpair.o 00:01:46.546 CC lib/nvme/nvme.o 00:01:46.546 CC lib/nvme/nvme_quirks.o 00:01:46.546 CC lib/nvme/nvme_transport.o 00:01:46.546 CC lib/nvme/nvme_discovery.o 00:01:46.546 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:46.546 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:46.546 CC lib/nvme/nvme_tcp.o 00:01:46.546 CC lib/nvme/nvme_opal.o 00:01:46.546 CC lib/nvme/nvme_io_msg.o 00:01:46.546 CC lib/nvme/nvme_poll_group.o 00:01:46.546 CC lib/nvme/nvme_zns.o 00:01:46.546 CC lib/nvme/nvme_stubs.o 00:01:46.546 CC lib/nvme/nvme_auth.o 00:01:46.546 CC lib/nvme/nvme_cuse.o 00:01:46.546 CC lib/nvme/nvme_vfio_user.o 00:01:46.546 CC lib/nvme/nvme_rdma.o 00:01:47.118 LIB libspdk_thread.a 00:01:47.118 SO libspdk_thread.so.11.0 00:01:47.118 SYMLINK libspdk_thread.so 00:01:47.380 CC lib/fsdev/fsdev.o 00:01:47.380 CC lib/fsdev/fsdev_io.o 
00:01:47.380 CC lib/fsdev/fsdev_rpc.o 00:01:47.380 CC lib/accel/accel.o 00:01:47.380 CC lib/accel/accel_rpc.o 00:01:47.380 CC lib/accel/accel_sw.o 00:01:47.380 CC lib/init/json_config.o 00:01:47.380 CC lib/init/subsystem.o 00:01:47.380 CC lib/init/subsystem_rpc.o 00:01:47.380 CC lib/blob/blobstore.o 00:01:47.380 CC lib/init/rpc.o 00:01:47.380 CC lib/blob/request.o 00:01:47.380 CC lib/virtio/virtio_vhost_user.o 00:01:47.380 CC lib/blob/zeroes.o 00:01:47.380 CC lib/virtio/virtio.o 00:01:47.380 CC lib/blob/blob_bs_dev.o 00:01:47.380 CC lib/virtio/virtio_vfio_user.o 00:01:47.380 CC lib/virtio/virtio_pci.o 00:01:47.642 CC lib/vfu_tgt/tgt_endpoint.o 00:01:47.642 CC lib/vfu_tgt/tgt_rpc.o 00:01:47.642 LIB libspdk_init.a 00:01:47.904 SO libspdk_init.so.6.0 00:01:47.904 LIB libspdk_virtio.a 00:01:47.904 LIB libspdk_vfu_tgt.a 00:01:47.904 SYMLINK libspdk_init.so 00:01:47.904 SO libspdk_vfu_tgt.so.3.0 00:01:47.904 SO libspdk_virtio.so.7.0 00:01:47.904 SYMLINK libspdk_vfu_tgt.so 00:01:47.904 SYMLINK libspdk_virtio.so 00:01:47.904 LIB libspdk_fsdev.a 00:01:48.165 SO libspdk_fsdev.so.2.0 00:01:48.165 SYMLINK libspdk_fsdev.so 00:01:48.165 CC lib/event/app.o 00:01:48.165 CC lib/event/reactor.o 00:01:48.165 CC lib/event/log_rpc.o 00:01:48.165 CC lib/event/app_rpc.o 00:01:48.165 CC lib/event/scheduler_static.o 00:01:48.428 LIB libspdk_accel.a 00:01:48.428 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:48.428 SO libspdk_accel.so.16.0 00:01:48.428 LIB libspdk_nvme.a 00:01:48.690 SYMLINK libspdk_accel.so 00:01:48.690 LIB libspdk_event.a 00:01:48.690 SO libspdk_event.so.14.0 00:01:48.690 SO libspdk_nvme.so.15.0 00:01:48.690 SYMLINK libspdk_event.so 00:01:48.951 SYMLINK libspdk_nvme.so 00:01:48.951 CC lib/bdev/bdev.o 00:01:48.951 CC lib/bdev/bdev_rpc.o 00:01:48.951 CC lib/bdev/bdev_zone.o 00:01:48.951 CC lib/bdev/part.o 00:01:48.951 CC lib/bdev/scsi_nvme.o 00:01:49.211 LIB libspdk_fuse_dispatcher.a 00:01:49.211 SO libspdk_fuse_dispatcher.so.1.0 00:01:49.211 SYMLINK 
libspdk_fuse_dispatcher.so 00:01:50.154 LIB libspdk_blob.a 00:01:50.154 SO libspdk_blob.so.12.0 00:01:50.414 SYMLINK libspdk_blob.so 00:01:50.675 CC lib/blobfs/blobfs.o 00:01:50.675 CC lib/blobfs/tree.o 00:01:50.675 CC lib/lvol/lvol.o 00:01:51.246 LIB libspdk_bdev.a 00:01:51.246 LIB libspdk_blobfs.a 00:01:51.246 SO libspdk_bdev.so.17.0 00:01:51.509 SO libspdk_blobfs.so.11.0 00:01:51.509 SYMLINK libspdk_bdev.so 00:01:51.509 SYMLINK libspdk_blobfs.so 00:01:51.509 LIB libspdk_lvol.a 00:01:51.509 SO libspdk_lvol.so.11.0 00:01:51.509 SYMLINK libspdk_lvol.so 00:01:51.771 CC lib/nbd/nbd.o 00:01:51.771 CC lib/nbd/nbd_rpc.o 00:01:51.771 CC lib/scsi/dev.o 00:01:51.771 CC lib/scsi/lun.o 00:01:51.771 CC lib/scsi/port.o 00:01:51.771 CC lib/scsi/scsi.o 00:01:51.771 CC lib/scsi/scsi_bdev.o 00:01:51.771 CC lib/ublk/ublk.o 00:01:51.771 CC lib/scsi/scsi_pr.o 00:01:51.771 CC lib/scsi/task.o 00:01:51.771 CC lib/ublk/ublk_rpc.o 00:01:51.771 CC lib/scsi/scsi_rpc.o 00:01:51.771 CC lib/nvmf/ctrlr.o 00:01:51.771 CC lib/ftl/ftl_core.o 00:01:51.771 CC lib/nvmf/ctrlr_discovery.o 00:01:51.771 CC lib/ftl/ftl_init.o 00:01:51.771 CC lib/nvmf/ctrlr_bdev.o 00:01:51.771 CC lib/ftl/ftl_layout.o 00:01:51.771 CC lib/nvmf/subsystem.o 00:01:51.771 CC lib/ftl/ftl_debug.o 00:01:51.771 CC lib/nvmf/nvmf.o 00:01:51.771 CC lib/ftl/ftl_io.o 00:01:51.771 CC lib/nvmf/nvmf_rpc.o 00:01:51.771 CC lib/nvmf/transport.o 00:01:51.771 CC lib/ftl/ftl_sb.o 00:01:51.771 CC lib/ftl/ftl_l2p_flat.o 00:01:51.771 CC lib/nvmf/tcp.o 00:01:51.771 CC lib/ftl/ftl_l2p.o 00:01:51.771 CC lib/nvmf/stubs.o 00:01:51.771 CC lib/nvmf/mdns_server.o 00:01:51.771 CC lib/ftl/ftl_nv_cache.o 00:01:51.771 CC lib/nvmf/vfio_user.o 00:01:51.771 CC lib/nvmf/rdma.o 00:01:51.771 CC lib/ftl/ftl_band.o 00:01:51.771 CC lib/nvmf/auth.o 00:01:51.771 CC lib/ftl/ftl_band_ops.o 00:01:51.771 CC lib/ftl/ftl_writer.o 00:01:51.771 CC lib/ftl/ftl_rq.o 00:01:51.771 CC lib/ftl/ftl_reloc.o 00:01:51.771 CC lib/ftl/ftl_l2p_cache.o 00:01:51.771 CC lib/ftl/ftl_p2l.o 
00:01:51.771 CC lib/ftl/ftl_p2l_log.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:51.771 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:51.771 CC lib/ftl/utils/ftl_conf.o 00:01:51.771 CC lib/ftl/utils/ftl_md.o 00:01:51.771 CC lib/ftl/utils/ftl_mempool.o 00:01:51.771 CC lib/ftl/utils/ftl_bitmap.o 00:01:51.771 CC lib/ftl/utils/ftl_property.o 00:01:51.771 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:51.771 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:51.771 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:51.771 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:51.771 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:51.771 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:51.771 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:51.771 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:51.771 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:51.771 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:51.771 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:51.771 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:51.771 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:52.031 CC lib/ftl/base/ftl_base_bdev.o 00:01:52.031 CC lib/ftl/base/ftl_base_dev.o 00:01:52.031 CC lib/ftl/ftl_trace.o 00:01:52.291 LIB libspdk_nbd.a 00:01:52.291 SO libspdk_nbd.so.7.0 00:01:52.291 LIB libspdk_scsi.a 00:01:52.291 SYMLINK libspdk_nbd.so 00:01:52.552 SO libspdk_scsi.so.9.0 00:01:52.552 SYMLINK libspdk_scsi.so 00:01:52.552 LIB libspdk_ublk.a 00:01:52.552 SO libspdk_ublk.so.3.0 00:01:52.552 SYMLINK libspdk_ublk.so 00:01:52.812 CC lib/vhost/vhost.o 00:01:52.812 CC lib/vhost/vhost_rpc.o 
00:01:52.812 CC lib/vhost/vhost_scsi.o 00:01:52.812 CC lib/vhost/vhost_blk.o 00:01:52.812 CC lib/vhost/rte_vhost_user.o 00:01:52.812 CC lib/iscsi/conn.o 00:01:52.812 CC lib/iscsi/init_grp.o 00:01:52.812 CC lib/iscsi/iscsi.o 00:01:52.812 CC lib/iscsi/param.o 00:01:52.812 CC lib/iscsi/portal_grp.o 00:01:52.812 CC lib/iscsi/tgt_node.o 00:01:52.812 CC lib/iscsi/iscsi_subsystem.o 00:01:52.812 CC lib/iscsi/iscsi_rpc.o 00:01:52.812 CC lib/iscsi/task.o 00:01:52.812 LIB libspdk_ftl.a 00:01:53.072 SO libspdk_ftl.so.9.0 00:01:53.334 SYMLINK libspdk_ftl.so 00:01:53.596 LIB libspdk_nvmf.a 00:01:53.858 SO libspdk_nvmf.so.20.0 00:01:53.858 LIB libspdk_vhost.a 00:01:53.858 SO libspdk_vhost.so.8.0 00:01:53.858 SYMLINK libspdk_nvmf.so 00:01:53.858 SYMLINK libspdk_vhost.so 00:01:54.119 LIB libspdk_iscsi.a 00:01:54.119 SO libspdk_iscsi.so.8.0 00:01:54.383 SYMLINK libspdk_iscsi.so 00:01:54.955 CC module/vfu_device/vfu_virtio.o 00:01:54.955 CC module/vfu_device/vfu_virtio_blk.o 00:01:54.955 CC module/vfu_device/vfu_virtio_scsi.o 00:01:54.955 CC module/vfu_device/vfu_virtio_fs.o 00:01:54.955 CC module/vfu_device/vfu_virtio_rpc.o 00:01:54.955 CC module/env_dpdk/env_dpdk_rpc.o 00:01:54.955 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:54.955 CC module/accel/iaa/accel_iaa.o 00:01:54.955 CC module/accel/iaa/accel_iaa_rpc.o 00:01:54.955 CC module/sock/posix/posix.o 00:01:54.955 CC module/accel/dsa/accel_dsa.o 00:01:54.955 LIB libspdk_env_dpdk_rpc.a 00:01:54.955 CC module/accel/dsa/accel_dsa_rpc.o 00:01:54.955 CC module/accel/ioat/accel_ioat.o 00:01:54.955 CC module/accel/error/accel_error.o 00:01:54.955 CC module/accel/ioat/accel_ioat_rpc.o 00:01:54.955 CC module/accel/error/accel_error_rpc.o 00:01:54.955 CC module/scheduler/gscheduler/gscheduler.o 00:01:54.955 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:54.955 CC module/blob/bdev/blob_bdev.o 00:01:54.955 CC module/keyring/file/keyring.o 00:01:54.955 CC module/fsdev/aio/fsdev_aio.o 00:01:54.955 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:01:54.955 CC module/keyring/file/keyring_rpc.o 00:01:54.955 CC module/fsdev/aio/linux_aio_mgr.o 00:01:54.955 CC module/keyring/linux/keyring.o 00:01:54.955 CC module/keyring/linux/keyring_rpc.o 00:01:54.955 SO libspdk_env_dpdk_rpc.so.6.0 00:01:55.216 SYMLINK libspdk_env_dpdk_rpc.so 00:01:55.216 LIB libspdk_keyring_linux.a 00:01:55.216 LIB libspdk_scheduler_dpdk_governor.a 00:01:55.216 LIB libspdk_scheduler_gscheduler.a 00:01:55.216 LIB libspdk_accel_ioat.a 00:01:55.216 LIB libspdk_keyring_file.a 00:01:55.216 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:55.216 SO libspdk_keyring_linux.so.1.0 00:01:55.216 LIB libspdk_accel_iaa.a 00:01:55.216 LIB libspdk_accel_error.a 00:01:55.216 SO libspdk_scheduler_gscheduler.so.4.0 00:01:55.216 LIB libspdk_scheduler_dynamic.a 00:01:55.216 SO libspdk_accel_ioat.so.6.0 00:01:55.216 SO libspdk_keyring_file.so.2.0 00:01:55.216 SO libspdk_accel_iaa.so.3.0 00:01:55.216 SO libspdk_accel_error.so.2.0 00:01:55.216 SO libspdk_scheduler_dynamic.so.4.0 00:01:55.216 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:55.216 LIB libspdk_blob_bdev.a 00:01:55.216 SYMLINK libspdk_keyring_linux.so 00:01:55.216 SYMLINK libspdk_scheduler_gscheduler.so 00:01:55.216 LIB libspdk_accel_dsa.a 00:01:55.216 SYMLINK libspdk_accel_ioat.so 00:01:55.477 SYMLINK libspdk_keyring_file.so 00:01:55.477 SO libspdk_blob_bdev.so.12.0 00:01:55.477 SO libspdk_accel_dsa.so.5.0 00:01:55.477 SYMLINK libspdk_accel_error.so 00:01:55.477 SYMLINK libspdk_scheduler_dynamic.so 00:01:55.477 SYMLINK libspdk_accel_iaa.so 00:01:55.477 LIB libspdk_vfu_device.a 00:01:55.477 SYMLINK libspdk_blob_bdev.so 00:01:55.477 SYMLINK libspdk_accel_dsa.so 00:01:55.477 SO libspdk_vfu_device.so.3.0 00:01:55.477 SYMLINK libspdk_vfu_device.so 00:01:55.739 LIB libspdk_fsdev_aio.a 00:01:55.739 SO libspdk_fsdev_aio.so.1.0 00:01:55.739 LIB libspdk_sock_posix.a 00:01:55.739 SO libspdk_sock_posix.so.6.0 00:01:55.739 SYMLINK libspdk_fsdev_aio.so 00:01:55.999 SYMLINK 
libspdk_sock_posix.so 00:01:55.999 CC module/bdev/gpt/vbdev_gpt.o 00:01:55.999 CC module/bdev/gpt/gpt.o 00:01:55.999 CC module/bdev/lvol/vbdev_lvol.o 00:01:55.999 CC module/bdev/null/bdev_null.o 00:01:55.999 CC module/bdev/error/vbdev_error.o 00:01:55.999 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:55.999 CC module/bdev/null/bdev_null_rpc.o 00:01:55.999 CC module/bdev/split/vbdev_split.o 00:01:55.999 CC module/bdev/error/vbdev_error_rpc.o 00:01:55.999 CC module/bdev/split/vbdev_split_rpc.o 00:01:55.999 CC module/bdev/delay/vbdev_delay.o 00:01:55.999 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:55.999 CC module/bdev/nvme/bdev_nvme.o 00:01:55.999 CC module/bdev/malloc/bdev_malloc.o 00:01:55.999 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:55.999 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:55.999 CC module/bdev/nvme/nvme_rpc.o 00:01:55.999 CC module/bdev/nvme/bdev_mdns_client.o 00:01:55.999 CC module/bdev/raid/bdev_raid.o 00:01:55.999 CC module/bdev/aio/bdev_aio.o 00:01:55.999 CC module/bdev/aio/bdev_aio_rpc.o 00:01:55.999 CC module/bdev/nvme/vbdev_opal.o 00:01:55.999 CC module/bdev/raid/bdev_raid_rpc.o 00:01:55.999 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:55.999 CC module/bdev/raid/bdev_raid_sb.o 00:01:55.999 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:55.999 CC module/bdev/raid/raid0.o 00:01:55.999 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:55.999 CC module/blobfs/bdev/blobfs_bdev.o 00:01:55.999 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:55.999 CC module/bdev/raid/raid1.o 00:01:55.999 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:55.999 CC module/bdev/raid/concat.o 00:01:55.999 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:55.999 CC module/bdev/iscsi/bdev_iscsi.o 00:01:55.999 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:55.999 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:55.999 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:55.999 CC module/bdev/passthru/vbdev_passthru.o 00:01:55.999 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:55.999 CC 
module/bdev/ftl/bdev_ftl.o 00:01:55.999 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:56.259 LIB libspdk_blobfs_bdev.a 00:01:56.259 LIB libspdk_bdev_error.a 00:01:56.259 LIB libspdk_bdev_split.a 00:01:56.259 LIB libspdk_bdev_null.a 00:01:56.259 LIB libspdk_bdev_gpt.a 00:01:56.259 SO libspdk_blobfs_bdev.so.6.0 00:01:56.259 SO libspdk_bdev_null.so.6.0 00:01:56.259 SO libspdk_bdev_error.so.6.0 00:01:56.259 SO libspdk_bdev_gpt.so.6.0 00:01:56.259 SO libspdk_bdev_split.so.6.0 00:01:56.259 SYMLINK libspdk_bdev_null.so 00:01:56.259 SYMLINK libspdk_blobfs_bdev.so 00:01:56.259 LIB libspdk_bdev_passthru.a 00:01:56.259 LIB libspdk_bdev_ftl.a 00:01:56.259 LIB libspdk_bdev_aio.a 00:01:56.259 LIB libspdk_bdev_delay.a 00:01:56.520 SYMLINK libspdk_bdev_error.so 00:01:56.520 SYMLINK libspdk_bdev_gpt.so 00:01:56.520 SYMLINK libspdk_bdev_split.so 00:01:56.520 LIB libspdk_bdev_zone_block.a 00:01:56.520 LIB libspdk_bdev_iscsi.a 00:01:56.520 SO libspdk_bdev_passthru.so.6.0 00:01:56.520 SO libspdk_bdev_ftl.so.6.0 00:01:56.520 LIB libspdk_bdev_malloc.a 00:01:56.520 SO libspdk_bdev_zone_block.so.6.0 00:01:56.520 SO libspdk_bdev_aio.so.6.0 00:01:56.520 SO libspdk_bdev_delay.so.6.0 00:01:56.520 SO libspdk_bdev_iscsi.so.6.0 00:01:56.520 SO libspdk_bdev_malloc.so.6.0 00:01:56.520 SYMLINK libspdk_bdev_passthru.so 00:01:56.520 SYMLINK libspdk_bdev_ftl.so 00:01:56.520 SYMLINK libspdk_bdev_zone_block.so 00:01:56.520 SYMLINK libspdk_bdev_aio.so 00:01:56.520 SYMLINK libspdk_bdev_delay.so 00:01:56.520 SYMLINK libspdk_bdev_iscsi.so 00:01:56.520 LIB libspdk_bdev_lvol.a 00:01:56.520 SYMLINK libspdk_bdev_malloc.so 00:01:56.520 LIB libspdk_bdev_virtio.a 00:01:56.520 SO libspdk_bdev_lvol.so.6.0 00:01:56.520 SO libspdk_bdev_virtio.so.6.0 00:01:56.520 SYMLINK libspdk_bdev_lvol.so 00:01:56.782 SYMLINK libspdk_bdev_virtio.so 00:01:57.042 LIB libspdk_bdev_raid.a 00:01:57.042 SO libspdk_bdev_raid.so.6.0 00:01:57.042 SYMLINK libspdk_bdev_raid.so 00:01:58.425 LIB libspdk_bdev_nvme.a 00:01:58.425 SO 
libspdk_bdev_nvme.so.7.1 00:01:58.425 SYMLINK libspdk_bdev_nvme.so 00:01:58.997 CC module/event/subsystems/scheduler/scheduler.o 00:01:59.258 CC module/event/subsystems/vmd/vmd.o 00:01:59.258 CC module/event/subsystems/keyring/keyring.o 00:01:59.258 CC module/event/subsystems/sock/sock.o 00:01:59.258 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:59.258 CC module/event/subsystems/iobuf/iobuf.o 00:01:59.258 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:59.258 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:59.258 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:59.258 CC module/event/subsystems/fsdev/fsdev.o 00:01:59.258 LIB libspdk_event_scheduler.a 00:01:59.258 LIB libspdk_event_keyring.a 00:01:59.258 LIB libspdk_event_fsdev.a 00:01:59.258 LIB libspdk_event_vfu_tgt.a 00:01:59.258 LIB libspdk_event_vmd.a 00:01:59.258 LIB libspdk_event_sock.a 00:01:59.258 LIB libspdk_event_vhost_blk.a 00:01:59.258 SO libspdk_event_keyring.so.1.0 00:01:59.258 SO libspdk_event_scheduler.so.4.0 00:01:59.258 LIB libspdk_event_iobuf.a 00:01:59.258 SO libspdk_event_sock.so.5.0 00:01:59.258 SO libspdk_event_vmd.so.6.0 00:01:59.258 SO libspdk_event_vfu_tgt.so.3.0 00:01:59.258 SO libspdk_event_fsdev.so.1.0 00:01:59.258 SO libspdk_event_vhost_blk.so.3.0 00:01:59.258 SO libspdk_event_iobuf.so.3.0 00:01:59.518 SYMLINK libspdk_event_keyring.so 00:01:59.518 SYMLINK libspdk_event_scheduler.so 00:01:59.518 SYMLINK libspdk_event_sock.so 00:01:59.518 SYMLINK libspdk_event_vfu_tgt.so 00:01:59.518 SYMLINK libspdk_event_vmd.so 00:01:59.518 SYMLINK libspdk_event_fsdev.so 00:01:59.518 SYMLINK libspdk_event_vhost_blk.so 00:01:59.518 SYMLINK libspdk_event_iobuf.so 00:01:59.779 CC module/event/subsystems/accel/accel.o 00:02:00.039 LIB libspdk_event_accel.a 00:02:00.039 SO libspdk_event_accel.so.6.0 00:02:00.039 SYMLINK libspdk_event_accel.so 00:02:00.298 CC module/event/subsystems/bdev/bdev.o 00:02:00.557 LIB libspdk_event_bdev.a 00:02:00.557 SO libspdk_event_bdev.so.6.0 00:02:00.818 SYMLINK 
libspdk_event_bdev.so 00:02:01.080 CC module/event/subsystems/ublk/ublk.o 00:02:01.080 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:01.080 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:01.080 CC module/event/subsystems/scsi/scsi.o 00:02:01.080 CC module/event/subsystems/nbd/nbd.o 00:02:01.080 LIB libspdk_event_ublk.a 00:02:01.340 LIB libspdk_event_nbd.a 00:02:01.340 SO libspdk_event_ublk.so.3.0 00:02:01.340 LIB libspdk_event_scsi.a 00:02:01.340 SO libspdk_event_nbd.so.6.0 00:02:01.340 SO libspdk_event_scsi.so.6.0 00:02:01.340 LIB libspdk_event_nvmf.a 00:02:01.340 SYMLINK libspdk_event_ublk.so 00:02:01.340 SO libspdk_event_nvmf.so.6.0 00:02:01.341 SYMLINK libspdk_event_nbd.so 00:02:01.341 SYMLINK libspdk_event_scsi.so 00:02:01.341 SYMLINK libspdk_event_nvmf.so 00:02:01.600 CC module/event/subsystems/iscsi/iscsi.o 00:02:01.600 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:01.861 LIB libspdk_event_vhost_scsi.a 00:02:01.861 LIB libspdk_event_iscsi.a 00:02:01.861 SO libspdk_event_iscsi.so.6.0 00:02:01.861 SO libspdk_event_vhost_scsi.so.3.0 00:02:01.861 SYMLINK libspdk_event_iscsi.so 00:02:01.861 SYMLINK libspdk_event_vhost_scsi.so 00:02:02.121 SO libspdk.so.6.0 00:02:02.121 SYMLINK libspdk.so 00:02:02.693 CC app/spdk_nvme_perf/perf.o 00:02:02.693 TEST_HEADER include/spdk/accel.h 00:02:02.693 CC test/rpc_client/rpc_client_test.o 00:02:02.693 CXX app/trace/trace.o 00:02:02.693 TEST_HEADER include/spdk/accel_module.h 00:02:02.693 TEST_HEADER include/spdk/assert.h 00:02:02.693 TEST_HEADER include/spdk/barrier.h 00:02:02.693 TEST_HEADER include/spdk/base64.h 00:02:02.693 TEST_HEADER include/spdk/bdev.h 00:02:02.693 TEST_HEADER include/spdk/bdev_module.h 00:02:02.693 CC app/trace_record/trace_record.o 00:02:02.693 CC app/spdk_nvme_discover/discovery_aer.o 00:02:02.693 TEST_HEADER include/spdk/bit_pool.h 00:02:02.693 TEST_HEADER include/spdk/bdev_zone.h 00:02:02.693 TEST_HEADER include/spdk/bit_array.h 00:02:02.693 CC app/spdk_lspci/spdk_lspci.o 00:02:02.693 
TEST_HEADER include/spdk/blob_bdev.h 00:02:02.693 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:02.693 CC app/spdk_nvme_identify/identify.o 00:02:02.693 TEST_HEADER include/spdk/blobfs.h 00:02:02.693 TEST_HEADER include/spdk/blob.h 00:02:02.693 TEST_HEADER include/spdk/conf.h 00:02:02.693 TEST_HEADER include/spdk/config.h 00:02:02.693 CC app/spdk_top/spdk_top.o 00:02:02.693 TEST_HEADER include/spdk/cpuset.h 00:02:02.693 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:02.693 TEST_HEADER include/spdk/crc16.h 00:02:02.693 TEST_HEADER include/spdk/crc32.h 00:02:02.693 TEST_HEADER include/spdk/dif.h 00:02:02.693 TEST_HEADER include/spdk/crc64.h 00:02:02.694 TEST_HEADER include/spdk/dma.h 00:02:02.694 TEST_HEADER include/spdk/endian.h 00:02:02.694 TEST_HEADER include/spdk/env_dpdk.h 00:02:02.694 TEST_HEADER include/spdk/env.h 00:02:02.694 TEST_HEADER include/spdk/event.h 00:02:02.694 TEST_HEADER include/spdk/fd.h 00:02:02.694 TEST_HEADER include/spdk/file.h 00:02:02.694 TEST_HEADER include/spdk/fd_group.h 00:02:02.694 TEST_HEADER include/spdk/fsdev.h 00:02:02.694 TEST_HEADER include/spdk/fsdev_module.h 00:02:02.694 TEST_HEADER include/spdk/ftl.h 00:02:02.694 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:02.694 TEST_HEADER include/spdk/gpt_spec.h 00:02:02.694 TEST_HEADER include/spdk/hexlify.h 00:02:02.694 TEST_HEADER include/spdk/histogram_data.h 00:02:02.694 TEST_HEADER include/spdk/idxd.h 00:02:02.694 TEST_HEADER include/spdk/init.h 00:02:02.694 TEST_HEADER include/spdk/idxd_spec.h 00:02:02.694 TEST_HEADER include/spdk/ioat.h 00:02:02.694 TEST_HEADER include/spdk/ioat_spec.h 00:02:02.694 TEST_HEADER include/spdk/iscsi_spec.h 00:02:02.694 TEST_HEADER include/spdk/json.h 00:02:02.694 TEST_HEADER include/spdk/jsonrpc.h 00:02:02.694 TEST_HEADER include/spdk/keyring.h 00:02:02.694 TEST_HEADER include/spdk/likely.h 00:02:02.694 CC app/nvmf_tgt/nvmf_main.o 00:02:02.694 TEST_HEADER include/spdk/keyring_module.h 00:02:02.694 CC app/spdk_dd/spdk_dd.o 00:02:02.694 TEST_HEADER 
include/spdk/md5.h 00:02:02.694 TEST_HEADER include/spdk/log.h 00:02:02.694 CC app/iscsi_tgt/iscsi_tgt.o 00:02:02.694 TEST_HEADER include/spdk/lvol.h 00:02:02.694 TEST_HEADER include/spdk/memory.h 00:02:02.694 TEST_HEADER include/spdk/mmio.h 00:02:02.694 TEST_HEADER include/spdk/nbd.h 00:02:02.694 TEST_HEADER include/spdk/notify.h 00:02:02.694 TEST_HEADER include/spdk/net.h 00:02:02.694 TEST_HEADER include/spdk/nvme.h 00:02:02.694 TEST_HEADER include/spdk/nvme_intel.h 00:02:02.694 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:02.694 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:02.694 TEST_HEADER include/spdk/nvme_spec.h 00:02:02.694 TEST_HEADER include/spdk/nvme_zns.h 00:02:02.694 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:02.694 TEST_HEADER include/spdk/nvmf.h 00:02:02.694 CC app/spdk_tgt/spdk_tgt.o 00:02:02.694 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:02.694 TEST_HEADER include/spdk/nvmf_spec.h 00:02:02.694 TEST_HEADER include/spdk/opal.h 00:02:02.694 TEST_HEADER include/spdk/nvmf_transport.h 00:02:02.694 TEST_HEADER include/spdk/pci_ids.h 00:02:02.694 TEST_HEADER include/spdk/opal_spec.h 00:02:02.694 TEST_HEADER include/spdk/queue.h 00:02:02.694 TEST_HEADER include/spdk/reduce.h 00:02:02.694 TEST_HEADER include/spdk/pipe.h 00:02:02.694 TEST_HEADER include/spdk/scheduler.h 00:02:02.694 TEST_HEADER include/spdk/rpc.h 00:02:02.694 TEST_HEADER include/spdk/scsi.h 00:02:02.694 TEST_HEADER include/spdk/sock.h 00:02:02.694 TEST_HEADER include/spdk/scsi_spec.h 00:02:02.694 TEST_HEADER include/spdk/string.h 00:02:02.694 TEST_HEADER include/spdk/stdinc.h 00:02:02.694 TEST_HEADER include/spdk/trace.h 00:02:02.694 TEST_HEADER include/spdk/thread.h 00:02:02.694 TEST_HEADER include/spdk/trace_parser.h 00:02:02.694 TEST_HEADER include/spdk/tree.h 00:02:02.694 TEST_HEADER include/spdk/ublk.h 00:02:02.694 TEST_HEADER include/spdk/util.h 00:02:02.694 TEST_HEADER include/spdk/version.h 00:02:02.694 TEST_HEADER include/spdk/uuid.h 00:02:02.694 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:02.694 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:02.694 TEST_HEADER include/spdk/vhost.h 00:02:02.694 TEST_HEADER include/spdk/vmd.h 00:02:02.694 TEST_HEADER include/spdk/xor.h 00:02:02.694 TEST_HEADER include/spdk/zipf.h 00:02:02.694 CXX test/cpp_headers/accel.o 00:02:02.694 CXX test/cpp_headers/accel_module.o 00:02:02.694 CXX test/cpp_headers/assert.o 00:02:02.694 CXX test/cpp_headers/barrier.o 00:02:02.694 CXX test/cpp_headers/base64.o 00:02:02.694 CXX test/cpp_headers/bdev.o 00:02:02.694 CXX test/cpp_headers/bdev_module.o 00:02:02.694 CXX test/cpp_headers/bit_array.o 00:02:02.694 CXX test/cpp_headers/bdev_zone.o 00:02:02.694 CXX test/cpp_headers/bit_pool.o 00:02:02.694 CXX test/cpp_headers/blob_bdev.o 00:02:02.694 CXX test/cpp_headers/blobfs.o 00:02:02.694 CXX test/cpp_headers/blobfs_bdev.o 00:02:02.694 CXX test/cpp_headers/blob.o 00:02:02.694 CXX test/cpp_headers/cpuset.o 00:02:02.694 CXX test/cpp_headers/conf.o 00:02:02.694 CXX test/cpp_headers/config.o 00:02:02.694 CXX test/cpp_headers/crc16.o 00:02:02.694 CXX test/cpp_headers/crc32.o 00:02:02.694 CXX test/cpp_headers/crc64.o 00:02:02.694 CXX test/cpp_headers/dif.o 00:02:02.694 CXX test/cpp_headers/dma.o 00:02:02.694 CXX test/cpp_headers/endian.o 00:02:02.694 CXX test/cpp_headers/env_dpdk.o 00:02:02.694 CXX test/cpp_headers/env.o 00:02:02.694 CXX test/cpp_headers/fd.o 00:02:02.694 CXX test/cpp_headers/event.o 00:02:02.694 CXX test/cpp_headers/fd_group.o 00:02:02.694 CXX test/cpp_headers/file.o 00:02:02.694 CXX test/cpp_headers/fsdev_module.o 00:02:02.694 CXX test/cpp_headers/fsdev.o 00:02:02.694 CXX test/cpp_headers/fuse_dispatcher.o 00:02:02.694 CXX test/cpp_headers/ftl.o 00:02:02.694 CXX test/cpp_headers/hexlify.o 00:02:02.694 CXX test/cpp_headers/gpt_spec.o 00:02:02.694 CXX test/cpp_headers/histogram_data.o 00:02:02.694 CXX test/cpp_headers/idxd_spec.o 00:02:02.694 CXX test/cpp_headers/init.o 00:02:02.694 CXX test/cpp_headers/idxd.o 00:02:02.694 CXX 
test/cpp_headers/ioat_spec.o 00:02:02.694 CXX test/cpp_headers/ioat.o 00:02:02.694 CXX test/cpp_headers/iscsi_spec.o 00:02:02.694 CXX test/cpp_headers/jsonrpc.o 00:02:02.694 CXX test/cpp_headers/keyring.o 00:02:02.694 CXX test/cpp_headers/json.o 00:02:02.694 CXX test/cpp_headers/keyring_module.o 00:02:02.694 CXX test/cpp_headers/log.o 00:02:02.694 CXX test/cpp_headers/likely.o 00:02:02.694 CXX test/cpp_headers/mmio.o 00:02:02.694 CXX test/cpp_headers/md5.o 00:02:02.694 CXX test/cpp_headers/memory.o 00:02:02.694 CXX test/cpp_headers/net.o 00:02:02.694 CXX test/cpp_headers/lvol.o 00:02:02.694 CXX test/cpp_headers/nvme.o 00:02:02.694 CXX test/cpp_headers/notify.o 00:02:02.694 CXX test/cpp_headers/nbd.o 00:02:02.694 CXX test/cpp_headers/nvme_intel.o 00:02:02.694 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:02.694 CXX test/cpp_headers/nvme_spec.o 00:02:02.694 CXX test/cpp_headers/nvme_ocssd.o 00:02:02.695 CC examples/util/zipf/zipf.o 00:02:02.695 CXX test/cpp_headers/nvmf_cmd.o 00:02:02.695 CXX test/cpp_headers/nvme_zns.o 00:02:02.695 CC examples/ioat/perf/perf.o 00:02:02.695 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:02.695 CXX test/cpp_headers/opal_spec.o 00:02:02.695 CXX test/cpp_headers/nvmf.o 00:02:02.695 CXX test/cpp_headers/nvmf_spec.o 00:02:02.695 CXX test/cpp_headers/opal.o 00:02:02.695 CXX test/cpp_headers/nvmf_transport.o 00:02:02.695 CXX test/cpp_headers/pipe.o 00:02:02.695 CC test/env/pci/pci_ut.o 00:02:02.695 CC test/app/histogram_perf/histogram_perf.o 00:02:02.695 CXX test/cpp_headers/reduce.o 00:02:02.695 CC test/env/vtophys/vtophys.o 00:02:02.695 CXX test/cpp_headers/queue.o 00:02:02.695 CXX test/cpp_headers/pci_ids.o 00:02:02.695 CXX test/cpp_headers/rpc.o 00:02:02.959 CXX test/cpp_headers/scheduler.o 00:02:02.959 CC examples/ioat/verify/verify.o 00:02:02.959 CXX test/cpp_headers/scsi_spec.o 00:02:02.959 CXX test/cpp_headers/scsi.o 00:02:02.959 CXX test/cpp_headers/sock.o 00:02:02.959 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:02.959 CXX 
test/cpp_headers/stdinc.o 00:02:02.959 CXX test/cpp_headers/string.o 00:02:02.959 LINK spdk_lspci 00:02:02.959 CXX test/cpp_headers/thread.o 00:02:02.959 CC test/env/memory/memory_ut.o 00:02:02.959 CXX test/cpp_headers/trace.o 00:02:02.959 CXX test/cpp_headers/trace_parser.o 00:02:02.959 CC test/thread/poller_perf/poller_perf.o 00:02:02.959 CC test/app/jsoncat/jsoncat.o 00:02:02.959 CXX test/cpp_headers/tree.o 00:02:02.959 CXX test/cpp_headers/ublk.o 00:02:02.959 CXX test/cpp_headers/util.o 00:02:02.959 CXX test/cpp_headers/uuid.o 00:02:02.959 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.959 CXX test/cpp_headers/version.o 00:02:02.959 CXX test/cpp_headers/vmd.o 00:02:02.959 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.959 CXX test/cpp_headers/vhost.o 00:02:02.959 CXX test/cpp_headers/xor.o 00:02:02.959 CXX test/cpp_headers/zipf.o 00:02:02.959 CC test/app/stub/stub.o 00:02:02.959 CC test/dma/test_dma/test_dma.o 00:02:02.959 CC app/fio/nvme/fio_plugin.o 00:02:02.959 CC test/app/bdev_svc/bdev_svc.o 00:02:02.959 LINK rpc_client_test 00:02:02.959 CC app/fio/bdev/fio_plugin.o 00:02:02.959 LINK spdk_nvme_discover 00:02:02.959 LINK interrupt_tgt 00:02:03.218 LINK nvmf_tgt 00:02:03.218 LINK spdk_trace_record 00:02:03.218 LINK iscsi_tgt 00:02:03.218 CC test/env/mem_callbacks/mem_callbacks.o 00:02:03.218 LINK spdk_tgt 00:02:03.218 LINK vtophys 00:02:03.218 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:03.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:03.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:03.218 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:03.478 LINK jsoncat 00:02:03.478 LINK histogram_perf 00:02:03.478 LINK poller_perf 00:02:03.478 LINK verify 00:02:03.478 LINK zipf 00:02:03.478 LINK spdk_dd 00:02:03.478 LINK bdev_svc 00:02:03.478 LINK stub 00:02:03.478 LINK env_dpdk_post_init 00:02:03.478 LINK ioat_perf 00:02:03.740 LINK spdk_trace 00:02:03.740 LINK spdk_nvme_perf 00:02:03.740 LINK pci_ut 00:02:03.740 LINK spdk_bdev 00:02:03.740 LINK spdk_nvme 
00:02:03.740 LINK vhost_fuzz 00:02:03.740 LINK nvme_fuzz 00:02:04.001 LINK test_dma 00:02:04.001 CC test/event/reactor_perf/reactor_perf.o 00:02:04.001 CC test/event/reactor/reactor.o 00:02:04.001 CC test/event/event_perf/event_perf.o 00:02:04.001 CC examples/sock/hello_world/hello_sock.o 00:02:04.001 CC examples/idxd/perf/perf.o 00:02:04.001 CC examples/vmd/led/led.o 00:02:04.001 CC test/event/app_repeat/app_repeat.o 00:02:04.001 CC examples/vmd/lsvmd/lsvmd.o 00:02:04.001 CC test/event/scheduler/scheduler.o 00:02:04.001 LINK spdk_top 00:02:04.001 CC app/vhost/vhost.o 00:02:04.001 LINK spdk_nvme_identify 00:02:04.001 LINK mem_callbacks 00:02:04.001 CC examples/thread/thread/thread_ex.o 00:02:04.001 LINK reactor 00:02:04.261 LINK reactor_perf 00:02:04.261 LINK event_perf 00:02:04.261 LINK lsvmd 00:02:04.261 LINK led 00:02:04.261 LINK app_repeat 00:02:04.261 LINK hello_sock 00:02:04.261 LINK vhost 00:02:04.261 LINK scheduler 00:02:04.261 LINK idxd_perf 00:02:04.261 LINK thread 00:02:04.261 LINK memory_ut 00:02:04.522 CC test/nvme/aer/aer.o 00:02:04.522 CC test/nvme/e2edp/nvme_dp.o 00:02:04.522 CC test/nvme/fused_ordering/fused_ordering.o 00:02:04.522 CC test/nvme/boot_partition/boot_partition.o 00:02:04.522 CC test/nvme/startup/startup.o 00:02:04.522 CC test/nvme/reset/reset.o 00:02:04.522 CC test/nvme/sgl/sgl.o 00:02:04.522 CC test/nvme/fdp/fdp.o 00:02:04.522 CC test/nvme/overhead/overhead.o 00:02:04.522 CC test/nvme/cuse/cuse.o 00:02:04.522 CC test/nvme/err_injection/err_injection.o 00:02:04.522 CC test/blobfs/mkfs/mkfs.o 00:02:04.522 CC test/nvme/simple_copy/simple_copy.o 00:02:04.522 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:04.522 CC test/nvme/reserve/reserve.o 00:02:04.522 CC test/nvme/connect_stress/connect_stress.o 00:02:04.522 CC test/nvme/compliance/nvme_compliance.o 00:02:04.522 CC test/accel/dif/dif.o 00:02:04.784 CC test/lvol/esnap/esnap.o 00:02:04.784 LINK startup 00:02:04.784 LINK boot_partition 00:02:04.784 CC 
examples/nvme/arbitration/arbitration.o 00:02:04.784 LINK err_injection 00:02:04.784 CC examples/nvme/hello_world/hello_world.o 00:02:04.784 CC examples/nvme/reconnect/reconnect.o 00:02:04.784 LINK fused_ordering 00:02:04.784 LINK connect_stress 00:02:04.784 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:04.784 CC examples/nvme/hotplug/hotplug.o 00:02:04.784 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:04.784 LINK simple_copy 00:02:04.784 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:04.784 CC examples/nvme/abort/abort.o 00:02:04.784 LINK doorbell_aers 00:02:04.784 LINK mkfs 00:02:04.784 LINK reserve 00:02:04.784 LINK aer 00:02:04.784 LINK nvme_dp 00:02:04.784 LINK sgl 00:02:04.784 LINK overhead 00:02:04.784 LINK reset 00:02:04.784 LINK nvme_compliance 00:02:04.784 LINK fdp 00:02:04.784 CC examples/accel/perf/accel_perf.o 00:02:04.784 CC examples/blob/hello_world/hello_blob.o 00:02:05.045 CC examples/blob/cli/blobcli.o 00:02:05.046 LINK cmb_copy 00:02:05.046 LINK iscsi_fuzz 00:02:05.046 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:05.046 LINK pmr_persistence 00:02:05.046 LINK hello_world 00:02:05.046 LINK hotplug 00:02:05.046 LINK arbitration 00:02:05.046 LINK reconnect 00:02:05.046 LINK hello_blob 00:02:05.046 LINK abort 00:02:05.307 LINK dif 00:02:05.307 LINK nvme_manage 00:02:05.307 LINK hello_fsdev 00:02:05.308 LINK accel_perf 00:02:05.308 LINK blobcli 00:02:05.570 LINK cuse 00:02:05.831 CC test/bdev/bdevio/bdevio.o 00:02:05.831 CC examples/bdev/bdevperf/bdevperf.o 00:02:05.831 CC examples/bdev/hello_world/hello_bdev.o 00:02:06.092 LINK bdevio 00:02:06.353 LINK hello_bdev 00:02:06.614 LINK bdevperf 00:02:07.558 CC examples/nvmf/nvmf/nvmf.o 00:02:07.558 LINK nvmf 00:02:09.473 LINK esnap 00:02:09.473 00:02:09.473 real 0m54.636s 00:02:09.473 user 7m48.565s 00:02:09.473 sys 4m25.880s 00:02:09.473 11:01:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:09.473 11:01:15 make -- common/autotest_common.sh@10 -- $ set +x 00:02:09.473 
************************************ 00:02:09.473 END TEST make 00:02:09.473 ************************************ 00:02:09.473 11:01:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:09.473 11:01:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:09.473 11:01:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:09.473 11:01:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.473 11:01:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:09.473 11:01:15 -- pm/common@44 -- $ pid=3080001 00:02:09.473 11:01:15 -- pm/common@50 -- $ kill -TERM 3080001 00:02:09.473 11:01:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.473 11:01:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:09.473 11:01:15 -- pm/common@44 -- $ pid=3080002 00:02:09.473 11:01:15 -- pm/common@50 -- $ kill -TERM 3080002 00:02:09.473 11:01:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.473 11:01:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:09.473 11:01:15 -- pm/common@44 -- $ pid=3080004 00:02:09.473 11:01:15 -- pm/common@50 -- $ kill -TERM 3080004 00:02:09.473 11:01:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.473 11:01:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:09.473 11:01:15 -- pm/common@44 -- $ pid=3080030 00:02:09.474 11:01:15 -- pm/common@50 -- $ sudo -E kill -TERM 3080030 00:02:09.474 11:01:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:09.474 11:01:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 
00:02:09.735 11:01:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:09.735 11:01:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:09.735 11:01:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:09.735 11:01:15 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:09.735 11:01:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:09.735 11:01:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:09.735 11:01:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:09.735 11:01:15 -- scripts/common.sh@336 -- # IFS=.-: 00:02:09.735 11:01:15 -- scripts/common.sh@336 -- # read -ra ver1 00:02:09.735 11:01:15 -- scripts/common.sh@337 -- # IFS=.-: 00:02:09.735 11:01:15 -- scripts/common.sh@337 -- # read -ra ver2 00:02:09.735 11:01:15 -- scripts/common.sh@338 -- # local 'op=<' 00:02:09.735 11:01:15 -- scripts/common.sh@340 -- # ver1_l=2 00:02:09.735 11:01:15 -- scripts/common.sh@341 -- # ver2_l=1 00:02:09.735 11:01:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:09.735 11:01:15 -- scripts/common.sh@344 -- # case "$op" in 00:02:09.735 11:01:15 -- scripts/common.sh@345 -- # : 1 00:02:09.735 11:01:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:09.735 11:01:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.735 11:01:15 -- scripts/common.sh@365 -- # decimal 1 00:02:09.735 11:01:15 -- scripts/common.sh@353 -- # local d=1 00:02:09.735 11:01:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:09.735 11:01:15 -- scripts/common.sh@355 -- # echo 1 00:02:09.735 11:01:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:09.735 11:01:15 -- scripts/common.sh@366 -- # decimal 2 00:02:09.735 11:01:15 -- scripts/common.sh@353 -- # local d=2 00:02:09.735 11:01:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:09.735 11:01:15 -- scripts/common.sh@355 -- # echo 2 00:02:09.735 11:01:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:09.735 11:01:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:09.735 11:01:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:09.735 11:01:15 -- scripts/common.sh@368 -- # return 0 00:02:09.735 11:01:15 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:09.735 11:01:15 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:09.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.735 --rc genhtml_branch_coverage=1 00:02:09.735 --rc genhtml_function_coverage=1 00:02:09.735 --rc genhtml_legend=1 00:02:09.735 --rc geninfo_all_blocks=1 00:02:09.735 --rc geninfo_unexecuted_blocks=1 00:02:09.735 00:02:09.735 ' 00:02:09.735 11:01:15 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:09.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.735 --rc genhtml_branch_coverage=1 00:02:09.735 --rc genhtml_function_coverage=1 00:02:09.735 --rc genhtml_legend=1 00:02:09.735 --rc geninfo_all_blocks=1 00:02:09.735 --rc geninfo_unexecuted_blocks=1 00:02:09.735 00:02:09.735 ' 00:02:09.735 11:01:15 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:09.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.735 --rc genhtml_branch_coverage=1 00:02:09.735 --rc 
genhtml_function_coverage=1 00:02:09.735 --rc genhtml_legend=1 00:02:09.735 --rc geninfo_all_blocks=1 00:02:09.735 --rc geninfo_unexecuted_blocks=1 00:02:09.735 00:02:09.735 ' 00:02:09.735 11:01:15 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:09.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.735 --rc genhtml_branch_coverage=1 00:02:09.735 --rc genhtml_function_coverage=1 00:02:09.735 --rc genhtml_legend=1 00:02:09.735 --rc geninfo_all_blocks=1 00:02:09.735 --rc geninfo_unexecuted_blocks=1 00:02:09.735 00:02:09.735 ' 00:02:09.735 11:01:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:09.735 11:01:15 -- nvmf/common.sh@7 -- # uname -s 00:02:09.735 11:01:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:09.735 11:01:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:09.735 11:01:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:09.735 11:01:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:09.735 11:01:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:09.735 11:01:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:09.735 11:01:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:09.735 11:01:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:09.735 11:01:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:09.735 11:01:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:09.735 11:01:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:09.735 11:01:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:09.735 11:01:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:09.735 11:01:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:09.735 11:01:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:09.735 11:01:15 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:09.735 11:01:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:09.735 11:01:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:09.735 11:01:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:09.735 11:01:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.735 11:01:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.735 11:01:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.735 11:01:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.735 11:01:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.735 11:01:15 -- paths/export.sh@5 -- # export PATH 00:02:09.735 11:01:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.735 11:01:15 -- nvmf/common.sh@51 -- # : 0 00:02:09.735 11:01:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:09.735 11:01:15 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:09.735 11:01:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:09.735 11:01:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:09.735 11:01:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:09.735 11:01:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:09.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:09.735 11:01:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:09.735 11:01:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:09.735 11:01:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:09.735 11:01:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:09.735 11:01:15 -- spdk/autotest.sh@32 -- # uname -s 00:02:09.735 11:01:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:09.735 11:01:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:09.735 11:01:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:09.735 11:01:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:09.735 11:01:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:09.735 11:01:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:09.735 11:01:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:09.735 11:01:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:09.735 11:01:15 -- spdk/autotest.sh@48 -- # udevadm_pid=3145727 00:02:09.735 11:01:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:09.735 11:01:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:09.735 11:01:15 -- pm/common@17 -- # local monitor 00:02:09.735 11:01:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.735 11:01:15 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:09.735 11:01:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.735 11:01:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.735 11:01:15 -- pm/common@21 -- # date +%s 00:02:09.735 11:01:15 -- pm/common@21 -- # date +%s 00:02:09.735 11:01:15 -- pm/common@25 -- # sleep 1 00:02:09.735 11:01:15 -- pm/common@21 -- # date +%s 00:02:09.735 11:01:15 -- pm/common@21 -- # date +%s 00:02:09.735 11:01:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479275 00:02:09.736 11:01:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479275 00:02:09.736 11:01:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479275 00:02:09.736 11:01:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733479275 00:02:09.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479275_collect-vmstat.pm.log 00:02:09.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479275_collect-cpu-load.pm.log 00:02:09.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479275_collect-cpu-temp.pm.log 00:02:09.996 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733479275_collect-bmc-pm.bmc.pm.log 00:02:10.938 
11:01:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:10.938 11:01:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:10.938 11:01:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:10.938 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:02:10.938 11:01:16 -- spdk/autotest.sh@59 -- # create_test_list 00:02:10.938 11:01:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:10.938 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:02:10.938 11:01:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:10.938 11:01:16 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.938 11:01:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.938 11:01:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.938 11:01:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.938 11:01:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:10.938 11:01:16 -- common/autotest_common.sh@1457 -- # uname 00:02:10.938 11:01:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:10.938 11:01:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:10.938 11:01:16 -- common/autotest_common.sh@1477 -- # uname 00:02:10.938 11:01:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:10.938 11:01:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:10.938 11:01:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:10.938 lcov: LCOV version 1.15 00:02:10.938 11:01:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:25.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:25.867 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:44.125 11:01:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:44.125 11:01:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:44.125 11:01:47 -- common/autotest_common.sh@10 -- # set +x 00:02:44.125 11:01:47 -- spdk/autotest.sh@78 -- # rm -f 00:02:44.125 11:01:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.066 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:45.066 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:45.066 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:45.327 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:45.327 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:45.327 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:45.588 11:01:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:45.588 11:01:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:45.588 11:01:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:45.588 11:01:51 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:02:45.588 11:01:51 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:02:45.588 11:01:51 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:02:45.588 11:01:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:02:45.588 11:01:51 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:02:45.588 11:01:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:45.588 11:01:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:45.588 11:01:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:45.588 11:01:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.588 11:01:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:45.588 11:01:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:45.588 11:01:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:45.588 11:01:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:45.588 11:01:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:45.588 11:01:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:45.588 11:01:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:45.588 No valid GPT data, bailing 00:02:45.588 11:01:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:45.588 11:01:51 -- scripts/common.sh@394 -- # pt= 00:02:45.588 11:01:51 -- scripts/common.sh@395 -- 
# return 1 00:02:45.588 11:01:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:45.588 1+0 records in 00:02:45.588 1+0 records out 00:02:45.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395249 s, 265 MB/s 00:02:45.588 11:01:51 -- spdk/autotest.sh@105 -- # sync 00:02:45.588 11:01:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:45.588 11:01:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:45.588 11:01:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:55.597 11:02:00 -- spdk/autotest.sh@111 -- # uname -s 00:02:55.597 11:02:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:55.597 11:02:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:55.597 11:02:00 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:58.143 Hugepages 00:02:58.143 node hugesize free / total 00:02:58.143 node0 1048576kB 0 / 0 00:02:58.143 node0 2048kB 0 / 0 00:02:58.143 node1 1048576kB 0 / 0 00:02:58.143 node1 2048kB 0 / 0 00:02:58.143 00:02:58.143 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.143 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:58.143 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:58.143 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:58.143 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:58.143 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:58.143 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:58.143 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:58.143 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:58.143 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:58.143 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:58.143 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:58.143 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:58.143 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:58.143 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:58.143 I/OAT 0000:80:01.5 8086 
0b00 1 ioatdma - - 00:02:58.143 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:58.143 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:58.143 11:02:04 -- spdk/autotest.sh@117 -- # uname -s 00:02:58.143 11:02:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:58.143 11:02:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:58.143 11:02:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.355 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:02.355 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:03.739 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:03.999 11:02:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:05.384 11:02:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:05.384 11:02:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:05.384 11:02:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:05.384 11:02:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:05.384 11:02:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:05.384 11:02:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:05.384 11:02:11 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:05.384 11:02:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:05.384 11:02:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:05.384 11:02:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:05.384 11:02:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:05.384 11:02:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.592 Waiting for block devices as requested 00:03:09.592 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:09.592 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:09.853 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:09.853 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:09.853 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:09.853 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:10.115 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:10.115 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:10.115 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:10.115 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:10.688 11:02:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:10.688 11:02:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:10.688 11:02:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:10.688 11:02:16 -- 
common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:10.688 11:02:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:10.688 11:02:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:10.688 11:02:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:10.688 11:02:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:10.688 11:02:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:10.688 11:02:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:10.688 11:02:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:10.688 11:02:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:10.688 11:02:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:10.688 11:02:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:10.688 11:02:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:10.688 11:02:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:10.688 11:02:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:10.688 11:02:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:10.688 11:02:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:10.688 11:02:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:10.688 11:02:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:10.688 11:02:16 -- common/autotest_common.sh@1543 -- # continue 00:03:10.688 11:02:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:10.688 11:02:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:10.688 11:02:16 -- common/autotest_common.sh@10 -- # set +x 00:03:10.688 11:02:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:10.688 11:02:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.688 
11:02:16 -- common/autotest_common.sh@10 -- # set +x 00:03:10.688 11:02:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.902 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:14.902 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:14.902 11:02:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:14.902 11:02:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:14.902 11:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:15.164 11:02:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:15.164 11:02:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:15.164 11:02:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:15.164 11:02:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:15.164 11:02:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:15.164 11:02:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:15.164 11:02:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:15.164 11:02:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
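The `nvme id-ctrl` parsing traced just above (around `autotest_common.sh@1531`) extracts the OACS word, masks bit 3 (value 8) to confirm the controller supports Namespace Management, and then checks that `unvmcap` is zero before continuing the revert path. A minimal re-implementation of that field parsing, fed captured output instead of a live controller (the helper names `parse_oacs_ns_manage` and `parse_unvmcap` are mine, not SPDK's):

```shell
#!/usr/bin/env bash
# Re-parse the two id-ctrl fields the trace above extracts. Input is a
# captured line of `nvme id-ctrl` output rather than a live /dev/nvme0.
parse_oacs_ns_manage() {
    # "oacs : 0x5f" -> mask bit 3 (0x8): nonzero means Namespace
    # Management/Attachment commands are supported.
    local oacs
    oacs=$(cut -d: -f2 <<<"$1")
    echo $(( oacs & 0x8 ))
}

parse_unvmcap() {
    # "unvmcap : 0" -> unallocated NVM capacity in bytes.
    local cap
    cap=$(cut -d: -f2 <<<"$1")
    echo $(( cap ))
}

# The log shows oacs=0x5f and unvmcap=0, matching oacs_ns_manage=8.
parse_oacs_ns_manage "oacs : 0x5f"   # prints 8
parse_unvmcap "unvmcap : 0"          # prints 0
```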
00:03:15.164 11:02:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:15.164 11:02:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:15.164 11:02:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:15.164 11:02:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:15.164 11:02:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:15.164 11:02:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:15.164 11:02:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:15.164 11:02:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:15.164 11:02:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:15.164 11:02:21 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:15.164 11:02:21 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:15.164 11:02:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:15.164 11:02:21 -- common/autotest_common.sh@1572 -- # return 0 00:03:15.164 11:02:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:15.164 11:02:21 -- common/autotest_common.sh@1580 -- # return 0 00:03:15.164 11:02:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:15.164 11:02:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:15.164 11:02:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:15.164 11:02:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:15.164 11:02:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:15.164 11:02:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.164 11:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:15.164 11:02:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:15.164 11:02:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:15.164 11:02:21 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:15.164 11:02:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.164 11:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:15.164 ************************************ 00:03:15.164 START TEST env 00:03:15.164 ************************************ 00:03:15.164 11:02:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:15.164 * Looking for test storage... 00:03:15.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:15.164 11:02:21 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:15.164 11:02:21 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:15.164 11:02:21 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:15.426 11:02:21 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:15.426 11:02:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.426 11:02:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.426 11:02:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.426 11:02:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.426 11:02:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.426 11:02:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.426 11:02:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.426 11:02:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.426 11:02:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.426 11:02:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.426 11:02:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.427 11:02:21 env -- scripts/common.sh@344 -- # case "$op" in 00:03:15.427 11:02:21 env -- scripts/common.sh@345 -- # : 1 00:03:15.427 11:02:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.427 11:02:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.427 11:02:21 env -- scripts/common.sh@365 -- # decimal 1 00:03:15.427 11:02:21 env -- scripts/common.sh@353 -- # local d=1 00:03:15.427 11:02:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.427 11:02:21 env -- scripts/common.sh@355 -- # echo 1 00:03:15.427 11:02:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.427 11:02:21 env -- scripts/common.sh@366 -- # decimal 2 00:03:15.427 11:02:21 env -- scripts/common.sh@353 -- # local d=2 00:03:15.427 11:02:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.427 11:02:21 env -- scripts/common.sh@355 -- # echo 2 00:03:15.427 11:02:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.427 11:02:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.427 11:02:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.427 11:02:21 env -- scripts/common.sh@368 -- # return 0 00:03:15.427 11:02:21 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.427 11:02:21 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.427 --rc genhtml_branch_coverage=1 00:03:15.427 --rc genhtml_function_coverage=1 00:03:15.427 --rc genhtml_legend=1 00:03:15.427 --rc geninfo_all_blocks=1 00:03:15.427 --rc geninfo_unexecuted_blocks=1 00:03:15.427 00:03:15.427 ' 00:03:15.427 11:02:21 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.427 --rc genhtml_branch_coverage=1 00:03:15.427 --rc genhtml_function_coverage=1 00:03:15.427 --rc genhtml_legend=1 00:03:15.427 --rc geninfo_all_blocks=1 00:03:15.427 --rc geninfo_unexecuted_blocks=1 00:03:15.427 00:03:15.427 ' 00:03:15.427 11:02:21 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:15.427 --rc genhtml_branch_coverage=1 00:03:15.427 --rc genhtml_function_coverage=1 00:03:15.427 --rc genhtml_legend=1 00:03:15.427 --rc geninfo_all_blocks=1 00:03:15.427 --rc geninfo_unexecuted_blocks=1 00:03:15.427 00:03:15.427 ' 00:03:15.427 11:02:21 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:15.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.427 --rc genhtml_branch_coverage=1 00:03:15.427 --rc genhtml_function_coverage=1 00:03:15.427 --rc genhtml_legend=1 00:03:15.427 --rc geninfo_all_blocks=1 00:03:15.427 --rc geninfo_unexecuted_blocks=1 00:03:15.427 00:03:15.427 ' 00:03:15.427 11:02:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:15.427 11:02:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:15.427 11:02:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.427 11:02:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:15.427 ************************************ 00:03:15.427 START TEST env_memory 00:03:15.427 ************************************ 00:03:15.427 11:02:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:15.427 00:03:15.427 00:03:15.427 CUnit - A unit testing framework for C - Version 2.1-3 00:03:15.427 http://cunit.sourceforge.net/ 00:03:15.427 00:03:15.427 00:03:15.427 Suite: memory 00:03:15.427 Test: alloc and free memory map ...[2024-12-06 11:02:21.510835] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:15.427 passed 00:03:15.427 Test: mem map translation ...[2024-12-06 11:02:21.536359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:15.427 [2024-12-06 
11:02:21.536388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:15.427 [2024-12-06 11:02:21.536435] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:15.427 [2024-12-06 11:02:21.536447] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:15.427 passed 00:03:15.427 Test: mem map registration ...[2024-12-06 11:02:21.591738] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:15.427 [2024-12-06 11:02:21.591760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:15.689 passed 00:03:15.689 Test: mem map adjacent registrations ...passed 00:03:15.689 00:03:15.689 Run Summary: Type Total Ran Passed Failed Inactive 00:03:15.689 suites 1 1 n/a 0 0 00:03:15.689 tests 4 4 4 0 0 00:03:15.689 asserts 152 152 152 0 n/a 00:03:15.689 00:03:15.689 Elapsed time = 0.195 seconds 00:03:15.689 00:03:15.689 real 0m0.210s 00:03:15.689 user 0m0.196s 00:03:15.689 sys 0m0.012s 00:03:15.689 11:02:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:15.689 11:02:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:15.689 ************************************ 00:03:15.689 END TEST env_memory 00:03:15.689 ************************************ 00:03:15.689 11:02:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:15.689 11:02:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:15.689 11:02:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.689 11:02:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:15.689 ************************************ 00:03:15.689 START TEST env_vtophys 00:03:15.689 ************************************ 00:03:15.689 11:02:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:15.689 EAL: lib.eal log level changed from notice to debug 00:03:15.689 EAL: Detected lcore 0 as core 0 on socket 0 00:03:15.689 EAL: Detected lcore 1 as core 1 on socket 0 00:03:15.689 EAL: Detected lcore 2 as core 2 on socket 0 00:03:15.689 EAL: Detected lcore 3 as core 3 on socket 0 00:03:15.689 EAL: Detected lcore 4 as core 4 on socket 0 00:03:15.689 EAL: Detected lcore 5 as core 5 on socket 0 00:03:15.689 EAL: Detected lcore 6 as core 6 on socket 0 00:03:15.689 EAL: Detected lcore 7 as core 7 on socket 0 00:03:15.689 EAL: Detected lcore 8 as core 8 on socket 0 00:03:15.689 EAL: Detected lcore 9 as core 9 on socket 0 00:03:15.689 EAL: Detected lcore 10 as core 10 on socket 0 00:03:15.689 EAL: Detected lcore 11 as core 11 on socket 0 00:03:15.689 EAL: Detected lcore 12 as core 12 on socket 0 00:03:15.689 EAL: Detected lcore 13 as core 13 on socket 0 00:03:15.689 EAL: Detected lcore 14 as core 14 on socket 0 00:03:15.689 EAL: Detected lcore 15 as core 15 on socket 0 00:03:15.689 EAL: Detected lcore 16 as core 16 on socket 0 00:03:15.689 EAL: Detected lcore 17 as core 17 on socket 0 00:03:15.689 EAL: Detected lcore 18 as core 18 on socket 0 00:03:15.689 EAL: Detected lcore 19 as core 19 on socket 0 00:03:15.689 EAL: Detected lcore 20 as core 20 on socket 0 00:03:15.689 EAL: Detected lcore 21 as core 21 on socket 0 00:03:15.689 EAL: Detected lcore 22 as core 22 on socket 0 00:03:15.689 EAL: Detected lcore 23 as core 23 on socket 0 00:03:15.689 EAL: Detected lcore 24 as core 24 on socket 0 00:03:15.689 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:15.689 EAL: Detected lcore 26 as core 26 on socket 0 00:03:15.689 EAL: Detected lcore 27 as core 27 on socket 0 00:03:15.689 EAL: Detected lcore 28 as core 28 on socket 0 00:03:15.689 EAL: Detected lcore 29 as core 29 on socket 0 00:03:15.689 EAL: Detected lcore 30 as core 30 on socket 0 00:03:15.689 EAL: Detected lcore 31 as core 31 on socket 0 00:03:15.689 EAL: Detected lcore 32 as core 32 on socket 0 00:03:15.689 EAL: Detected lcore 33 as core 33 on socket 0 00:03:15.689 EAL: Detected lcore 34 as core 34 on socket 0 00:03:15.689 EAL: Detected lcore 35 as core 35 on socket 0 00:03:15.689 EAL: Detected lcore 36 as core 0 on socket 1 00:03:15.689 EAL: Detected lcore 37 as core 1 on socket 1 00:03:15.689 EAL: Detected lcore 38 as core 2 on socket 1 00:03:15.689 EAL: Detected lcore 39 as core 3 on socket 1 00:03:15.689 EAL: Detected lcore 40 as core 4 on socket 1 00:03:15.689 EAL: Detected lcore 41 as core 5 on socket 1 00:03:15.689 EAL: Detected lcore 42 as core 6 on socket 1 00:03:15.689 EAL: Detected lcore 43 as core 7 on socket 1 00:03:15.689 EAL: Detected lcore 44 as core 8 on socket 1 00:03:15.690 EAL: Detected lcore 45 as core 9 on socket 1 00:03:15.690 EAL: Detected lcore 46 as core 10 on socket 1 00:03:15.690 EAL: Detected lcore 47 as core 11 on socket 1 00:03:15.690 EAL: Detected lcore 48 as core 12 on socket 1 00:03:15.690 EAL: Detected lcore 49 as core 13 on socket 1 00:03:15.690 EAL: Detected lcore 50 as core 14 on socket 1 00:03:15.690 EAL: Detected lcore 51 as core 15 on socket 1 00:03:15.690 EAL: Detected lcore 52 as core 16 on socket 1 00:03:15.690 EAL: Detected lcore 53 as core 17 on socket 1 00:03:15.690 EAL: Detected lcore 54 as core 18 on socket 1 00:03:15.690 EAL: Detected lcore 55 as core 19 on socket 1 00:03:15.690 EAL: Detected lcore 56 as core 20 on socket 1 00:03:15.690 EAL: Detected lcore 57 as core 21 on socket 1 00:03:15.690 EAL: Detected lcore 58 as core 22 on socket 1 00:03:15.690 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:15.690 EAL: Detected lcore 60 as core 24 on socket 1 00:03:15.690 EAL: Detected lcore 61 as core 25 on socket 1 00:03:15.690 EAL: Detected lcore 62 as core 26 on socket 1 00:03:15.690 EAL: Detected lcore 63 as core 27 on socket 1 00:03:15.690 EAL: Detected lcore 64 as core 28 on socket 1 00:03:15.690 EAL: Detected lcore 65 as core 29 on socket 1 00:03:15.690 EAL: Detected lcore 66 as core 30 on socket 1 00:03:15.690 EAL: Detected lcore 67 as core 31 on socket 1 00:03:15.690 EAL: Detected lcore 68 as core 32 on socket 1 00:03:15.690 EAL: Detected lcore 69 as core 33 on socket 1 00:03:15.690 EAL: Detected lcore 70 as core 34 on socket 1 00:03:15.690 EAL: Detected lcore 71 as core 35 on socket 1 00:03:15.690 EAL: Detected lcore 72 as core 0 on socket 0 00:03:15.690 EAL: Detected lcore 73 as core 1 on socket 0 00:03:15.690 EAL: Detected lcore 74 as core 2 on socket 0 00:03:15.690 EAL: Detected lcore 75 as core 3 on socket 0 00:03:15.690 EAL: Detected lcore 76 as core 4 on socket 0 00:03:15.690 EAL: Detected lcore 77 as core 5 on socket 0 00:03:15.690 EAL: Detected lcore 78 as core 6 on socket 0 00:03:15.690 EAL: Detected lcore 79 as core 7 on socket 0 00:03:15.690 EAL: Detected lcore 80 as core 8 on socket 0 00:03:15.690 EAL: Detected lcore 81 as core 9 on socket 0 00:03:15.690 EAL: Detected lcore 82 as core 10 on socket 0 00:03:15.690 EAL: Detected lcore 83 as core 11 on socket 0 00:03:15.690 EAL: Detected lcore 84 as core 12 on socket 0 00:03:15.690 EAL: Detected lcore 85 as core 13 on socket 0 00:03:15.690 EAL: Detected lcore 86 as core 14 on socket 0 00:03:15.690 EAL: Detected lcore 87 as core 15 on socket 0 00:03:15.690 EAL: Detected lcore 88 as core 16 on socket 0 00:03:15.690 EAL: Detected lcore 89 as core 17 on socket 0 00:03:15.690 EAL: Detected lcore 90 as core 18 on socket 0 00:03:15.690 EAL: Detected lcore 91 as core 19 on socket 0 00:03:15.690 EAL: Detected lcore 92 as core 20 on socket 0 00:03:15.690 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:15.690 EAL: Detected lcore 94 as core 22 on socket 0 00:03:15.690 EAL: Detected lcore 95 as core 23 on socket 0 00:03:15.690 EAL: Detected lcore 96 as core 24 on socket 0 00:03:15.690 EAL: Detected lcore 97 as core 25 on socket 0 00:03:15.690 EAL: Detected lcore 98 as core 26 on socket 0 00:03:15.690 EAL: Detected lcore 99 as core 27 on socket 0 00:03:15.690 EAL: Detected lcore 100 as core 28 on socket 0 00:03:15.690 EAL: Detected lcore 101 as core 29 on socket 0 00:03:15.690 EAL: Detected lcore 102 as core 30 on socket 0 00:03:15.690 EAL: Detected lcore 103 as core 31 on socket 0 00:03:15.690 EAL: Detected lcore 104 as core 32 on socket 0 00:03:15.690 EAL: Detected lcore 105 as core 33 on socket 0 00:03:15.690 EAL: Detected lcore 106 as core 34 on socket 0 00:03:15.690 EAL: Detected lcore 107 as core 35 on socket 0 00:03:15.690 EAL: Detected lcore 108 as core 0 on socket 1 00:03:15.690 EAL: Detected lcore 109 as core 1 on socket 1 00:03:15.690 EAL: Detected lcore 110 as core 2 on socket 1 00:03:15.690 EAL: Detected lcore 111 as core 3 on socket 1 00:03:15.690 EAL: Detected lcore 112 as core 4 on socket 1 00:03:15.690 EAL: Detected lcore 113 as core 5 on socket 1 00:03:15.690 EAL: Detected lcore 114 as core 6 on socket 1 00:03:15.690 EAL: Detected lcore 115 as core 7 on socket 1 00:03:15.690 EAL: Detected lcore 116 as core 8 on socket 1 00:03:15.690 EAL: Detected lcore 117 as core 9 on socket 1 00:03:15.690 EAL: Detected lcore 118 as core 10 on socket 1 00:03:15.690 EAL: Detected lcore 119 as core 11 on socket 1 00:03:15.690 EAL: Detected lcore 120 as core 12 on socket 1 00:03:15.690 EAL: Detected lcore 121 as core 13 on socket 1 00:03:15.690 EAL: Detected lcore 122 as core 14 on socket 1 00:03:15.690 EAL: Detected lcore 123 as core 15 on socket 1 00:03:15.690 EAL: Detected lcore 124 as core 16 on socket 1 00:03:15.690 EAL: Detected lcore 125 as core 17 on socket 1 00:03:15.690 EAL: Detected lcore 126 as core 18 on socket 1 00:03:15.690 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:15.690 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:15.690 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:15.690 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:15.690 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:15.690 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:15.690 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:15.690 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:15.690 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:15.690 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:15.690 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:15.690 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:15.690 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:15.690 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:15.690 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:15.690 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:15.690 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:15.690 EAL: Maximum logical cores by configuration: 128 00:03:15.690 EAL: Detected CPU lcores: 128 00:03:15.690 EAL: Detected NUMA nodes: 2 00:03:15.690 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:15.690 EAL: Detected shared linkage of DPDK 00:03:15.690 EAL: No shared files mode enabled, IPC will be disabled 00:03:15.690 EAL: Bus pci wants IOVA as 'DC' 00:03:15.690 EAL: Buses did not request a specific IOVA mode. 00:03:15.690 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:15.690 EAL: Selected IOVA mode 'VA' 00:03:15.690 EAL: Probing VFIO support... 00:03:15.690 EAL: IOMMU type 1 (Type 1) is supported 00:03:15.690 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:15.690 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:15.690 EAL: VFIO support initialized 00:03:15.690 EAL: Ask a virtual area of 0x2e000 bytes 00:03:15.690 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:15.690 EAL: Setting up physically contiguous memory... 
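EAL's detection above maps 128 usable lcores onto 2 NUMA sockets (lcores 128–143 are skipped, presumably because they fall past the build's maximum lcore count). The same socket/lcore census can be taken from sysfs directly; a Linux-only sketch, independent of DPDK:

```shell
#!/usr/bin/env bash
# Count online CPUs per physical package from sysfs, mirroring the
# EAL "Detected lcore N as core M on socket S" census above.
declare -A per_socket
for topo in /sys/devices/system/cpu/cpu[0-9]*/topology/physical_package_id; do
    [[ -r $topo ]] || continue
    socket=$(<"$topo")
    per_socket[$socket]=$(( ${per_socket[$socket]:-0} + 1 ))
done
for socket in $(printf '%s\n' "${!per_socket[@]}" | sort -n); do
    echo "socket $socket: ${per_socket[$socket]} lcores"
done
```

On the machine in this log it would report two sockets with 64 detected lcores each; on other hosts the counts follow the local topology.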
00:03:15.690 EAL: Setting maximum number of open files to 524288 00:03:15.690 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:15.690 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:15.690 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:15.690 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:15.690 EAL: Ask a virtual area of 0x61000 bytes 00:03:15.690 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:15.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:15.690 EAL: Ask a virtual area of 0x400000000 bytes 00:03:15.690 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:15.690 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:15.690 EAL: Hugepages will be freed exactly as allocated. 
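The memseg layout above is self-consistent: per socket, EAL creates 4 lists of n_segs:8192 backed by 2 MiB hugepages, and each list reserves a 0x61000-byte header plus a 0x400000000-byte (16 GiB) VA window. A quick arithmetic check of those reservations:

```shell
#!/usr/bin/env bash
# Verify the EAL memseg sizing logged above: 8192 segments x 2 MiB
# hugepages should fill exactly the 0x400000000-byte window per list.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))            # hugepage_sz:2097152
window=$(( n_segs * hugepage_sz ))
lists_per_socket=4
printf 'per list:   0x%x bytes (%d GiB)\n' "$window" $(( window >> 30 ))
printf 'per socket: %d GiB of VA reserved\n' $(( lists_per_socket * (window >> 30) ))
```

That is, each socket's four lists pin down 64 GiB of virtual address space, which is why the reserved regions in the log step by 0x400000000.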
00:03:15.690 EAL: No shared files mode enabled, IPC is disabled
00:03:15.690 EAL: No shared files mode enabled, IPC is disabled
00:03:15.690 EAL: TSC frequency is ~2400000 KHz
00:03:15.690 EAL: Main lcore 0 is ready (tid=7fef0697ea00;cpuset=[0])
00:03:15.690 EAL: Trying to obtain current memory policy.
00:03:15.690 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.690 EAL: Restoring previous memory policy: 0
00:03:15.690 EAL: request: mp_malloc_sync
00:03:15.690 EAL: No shared files mode enabled, IPC is disabled
00:03:15.690 EAL: Heap on socket 0 was expanded by 2MB
00:03:15.690 EAL: No shared files mode enabled, IPC is disabled
00:03:15.690 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:15.690 EAL: Mem event callback 'spdk:(nil)' registered
00:03:15.953
00:03:15.953
00:03:15.953 CUnit - A unit testing framework for C - Version 2.1-3
00:03:15.953 http://cunit.sourceforge.net/
00:03:15.953
00:03:15.953
00:03:15.953 Suite: components_suite
00:03:15.953 Test: vtophys_malloc_test ...passed
00:03:15.953 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.953 EAL: Restoring previous memory policy: 4
00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.953 EAL: request: mp_malloc_sync
00:03:15.953 EAL: No shared files mode enabled, IPC is disabled
00:03:15.953 EAL: Heap on socket 0 was expanded by 4MB
00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.953 EAL: request: mp_malloc_sync
00:03:15.953 EAL: No shared files mode enabled, IPC is disabled
00:03:15.953 EAL: Heap on socket 0 was shrunk by 4MB
00:03:15.953 EAL: Trying to obtain current memory policy.
00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 6MB 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was shrunk by 6MB 00:03:15.953 EAL: Trying to obtain current memory policy. 00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 10MB 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was shrunk by 10MB 00:03:15.953 EAL: Trying to obtain current memory policy. 00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 18MB 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was shrunk by 18MB 00:03:15.953 EAL: Trying to obtain current memory policy. 
00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 34MB 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was shrunk by 34MB 00:03:15.953 EAL: Trying to obtain current memory policy. 00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 66MB 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was shrunk by 66MB 00:03:15.953 EAL: Trying to obtain current memory policy. 00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 130MB 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was shrunk by 130MB 00:03:15.953 EAL: Trying to obtain current memory policy. 
00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 258MB 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was shrunk by 258MB 00:03:15.953 EAL: Trying to obtain current memory policy. 00:03:15.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:15.953 EAL: Restoring previous memory policy: 4 00:03:15.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:15.953 EAL: request: mp_malloc_sync 00:03:15.953 EAL: No shared files mode enabled, IPC is disabled 00:03:15.953 EAL: Heap on socket 0 was expanded by 514MB 00:03:16.214 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.214 EAL: request: mp_malloc_sync 00:03:16.214 EAL: No shared files mode enabled, IPC is disabled 00:03:16.214 EAL: Heap on socket 0 was shrunk by 514MB 00:03:16.214 EAL: Trying to obtain current memory policy. 
00:03:16.214 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:16.214 EAL: Restoring previous memory policy: 4
00:03:16.214 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.214 EAL: request: mp_malloc_sync
00:03:16.214 EAL: No shared files mode enabled, IPC is disabled
00:03:16.214 EAL: Heap on socket 0 was expanded by 1026MB
00:03:16.476 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.476 EAL: request: mp_malloc_sync
00:03:16.476 EAL: No shared files mode enabled, IPC is disabled
00:03:16.476 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:16.476 passed
00:03:16.476
00:03:16.476 Run Summary: Type Total Ran Passed Failed Inactive
00:03:16.476 suites 1 1 n/a 0 0
00:03:16.476 tests 2 2 2 0 0
00:03:16.476 asserts 497 497 497 0 n/a
00:03:16.476
00:03:16.476 Elapsed time = 0.659 seconds
00:03:16.476 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.476 EAL: request: mp_malloc_sync
00:03:16.476 EAL: No shared files mode enabled, IPC is disabled
00:03:16.476 EAL: Heap on socket 0 was shrunk by 2MB
00:03:16.476 EAL: No shared files mode enabled, IPC is disabled
00:03:16.476 EAL: No shared files mode enabled, IPC is disabled
00:03:16.476 EAL: No shared files mode enabled, IPC is disabled
00:03:16.476
00:03:16.476 real 0m0.805s
00:03:16.476 user 0m0.414s
00:03:16.476 sys 0m0.360s
00:03:16.476 11:02:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:16.476 11:02:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:16.476 ************************************
00:03:16.476 END TEST env_vtophys
00:03:16.476 ************************************
00:03:16.476 11:02:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:16.476 11:02:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:16.476 11:02:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:16.476 11:02:22 env -- common/autotest_common.sh@10 -- # set +x
00:03:16.476
************************************ 00:03:16.476 START TEST env_pci 00:03:16.476 ************************************ 00:03:16.476 11:02:22 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:16.738 00:03:16.738 00:03:16.738 CUnit - A unit testing framework for C - Version 2.1-3 00:03:16.738 http://cunit.sourceforge.net/ 00:03:16.738 00:03:16.738 00:03:16.738 Suite: pci 00:03:16.738 Test: pci_hook ...[2024-12-06 11:02:22.645313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3165962 has claimed it 00:03:16.738 EAL: Cannot find device (10000:00:01.0) 00:03:16.738 EAL: Failed to attach device on primary process 00:03:16.738 passed 00:03:16.738 00:03:16.738 Run Summary: Type Total Ran Passed Failed Inactive 00:03:16.738 suites 1 1 n/a 0 0 00:03:16.738 tests 1 1 1 0 0 00:03:16.738 asserts 25 25 25 0 n/a 00:03:16.738 00:03:16.738 Elapsed time = 0.036 seconds 00:03:16.738 00:03:16.738 real 0m0.057s 00:03:16.738 user 0m0.019s 00:03:16.738 sys 0m0.037s 00:03:16.738 11:02:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.738 11:02:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:16.738 ************************************ 00:03:16.738 END TEST env_pci 00:03:16.738 ************************************ 00:03:16.738 11:02:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:16.738 11:02:22 env -- env/env.sh@15 -- # uname 00:03:16.738 11:02:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:16.738 11:02:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:16.738 11:02:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:16.738 11:02:22 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:16.738 11:02:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:16.738 11:02:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.738 ************************************ 00:03:16.738 START TEST env_dpdk_post_init 00:03:16.738 ************************************ 00:03:16.738 11:02:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:16.738 EAL: Detected CPU lcores: 128 00:03:16.738 EAL: Detected NUMA nodes: 2 00:03:16.738 EAL: Detected shared linkage of DPDK 00:03:16.738 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:16.738 EAL: Selected IOVA mode 'VA' 00:03:16.738 EAL: VFIO support initialized 00:03:16.738 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:16.999 EAL: Using IOMMU type 1 (Type 1) 00:03:16.999 EAL: Ignore mapping IO port bar(1) 00:03:16.999 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:17.260 EAL: Ignore mapping IO port bar(1) 00:03:17.261 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:17.522 EAL: Ignore mapping IO port bar(1) 00:03:17.522 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:17.784 EAL: Ignore mapping IO port bar(1) 00:03:17.784 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:17.784 EAL: Ignore mapping IO port bar(1) 00:03:18.045 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:18.045 EAL: Ignore mapping IO port bar(1) 00:03:18.306 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:18.306 EAL: Ignore mapping IO port bar(1) 00:03:18.567 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:18.567 EAL: Ignore mapping IO port bar(1) 00:03:18.567 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:18.829 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:19.106 EAL: Ignore mapping IO port bar(1) 00:03:19.106 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:19.106 EAL: Ignore mapping IO port bar(1) 00:03:19.369 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:19.369 EAL: Ignore mapping IO port bar(1) 00:03:19.630 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:19.630 EAL: Ignore mapping IO port bar(1) 00:03:19.891 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:19.891 EAL: Ignore mapping IO port bar(1) 00:03:19.891 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:20.151 EAL: Ignore mapping IO port bar(1) 00:03:20.151 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:20.412 EAL: Ignore mapping IO port bar(1) 00:03:20.412 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:20.673 EAL: Ignore mapping IO port bar(1) 00:03:20.673 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:20.673 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:20.673 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:20.673 Starting DPDK initialization... 00:03:20.673 Starting SPDK post initialization... 00:03:20.673 SPDK NVMe probe 00:03:20.673 Attaching to 0000:65:00.0 00:03:20.673 Attached to 0000:65:00.0 00:03:20.673 Cleaning up... 
00:03:22.590 00:03:22.590 real 0m5.741s 00:03:22.590 user 0m0.113s 00:03:22.590 sys 0m0.174s 00:03:22.590 11:02:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.590 11:02:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:22.590 ************************************ 00:03:22.590 END TEST env_dpdk_post_init 00:03:22.590 ************************************ 00:03:22.590 11:02:28 env -- env/env.sh@26 -- # uname 00:03:22.590 11:02:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:22.590 11:02:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.590 11:02:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.590 11:02:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.590 11:02:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.590 ************************************ 00:03:22.590 START TEST env_mem_callbacks 00:03:22.590 ************************************ 00:03:22.590 11:02:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.590 EAL: Detected CPU lcores: 128 00:03:22.590 EAL: Detected NUMA nodes: 2 00:03:22.590 EAL: Detected shared linkage of DPDK 00:03:22.590 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.590 EAL: Selected IOVA mode 'VA' 00:03:22.590 EAL: VFIO support initialized 00:03:22.590 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.590 00:03:22.590 00:03:22.590 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.590 http://cunit.sourceforge.net/ 00:03:22.590 00:03:22.590 00:03:22.590 Suite: memory 00:03:22.590 Test: test ... 
00:03:22.590 register 0x200000200000 2097152
00:03:22.590 malloc 3145728
00:03:22.590 register 0x200000400000 4194304
00:03:22.590 buf 0x200000500000 len 3145728 PASSED
00:03:22.590 malloc 64
00:03:22.590 buf 0x2000004fff40 len 64 PASSED
00:03:22.590 malloc 4194304
00:03:22.590 register 0x200000800000 6291456
00:03:22.590 buf 0x200000a00000 len 4194304 PASSED
00:03:22.590 free 0x200000500000 3145728
00:03:22.590 free 0x2000004fff40 64
00:03:22.590 unregister 0x200000400000 4194304 PASSED
00:03:22.590 free 0x200000a00000 4194304
00:03:22.590 unregister 0x200000800000 6291456 PASSED
00:03:22.591 malloc 8388608
00:03:22.591 register 0x200000400000 10485760
00:03:22.591 buf 0x200000600000 len 8388608 PASSED
00:03:22.591 free 0x200000600000 8388608
00:03:22.591 unregister 0x200000400000 10485760 PASSED
00:03:22.591 passed
00:03:22.591
00:03:22.591 Run Summary: Type Total Ran Passed Failed Inactive
00:03:22.591 suites 1 1 n/a 0 0
00:03:22.591 tests 1 1 1 0 0
00:03:22.591 asserts 15 15 15 0 n/a
00:03:22.591
00:03:22.591 Elapsed time = 0.006 seconds
00:03:22.591
00:03:22.591 real 0m0.069s
00:03:22.591 user 0m0.019s
00:03:22.591 sys 0m0.050s
00:03:22.591 11:02:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:22.591 11:02:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:22.591 ************************************
00:03:22.591 END TEST env_mem_callbacks
00:03:22.591 ************************************
00:03:22.591
00:03:22.591 real 0m7.484s
00:03:22.591 user 0m1.029s
00:03:22.591 sys 0m1.002s
00:03:22.591 11:02:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:22.591 11:02:28 env -- common/autotest_common.sh@10 -- # set +x
00:03:22.591 ************************************
00:03:22.591 END TEST env
00:03:22.591 ************************************
00:03:22.591 11:02:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:22.591 11:02:28
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.591 11:02:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.591 11:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:22.853 ************************************ 00:03:22.853 START TEST rpc 00:03:22.853 ************************************ 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:22.853 * Looking for test storage... 00:03:22.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.853 11:02:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.853 11:02:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.853 11:02:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.853 11:02:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.853 11:02:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.853 11:02:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.853 11:02:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.853 11:02:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:22.853 11:02:28 rpc -- scripts/common.sh@345 -- # : 1 00:03:22.853 11:02:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.853 11:02:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.853 11:02:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:22.853 11:02:28 rpc -- scripts/common.sh@353 -- # local d=1 00:03:22.853 11:02:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.853 11:02:28 rpc -- scripts/common.sh@355 -- # echo 1 00:03:22.853 11:02:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.853 11:02:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@353 -- # local d=2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.853 11:02:28 rpc -- scripts/common.sh@355 -- # echo 2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.853 11:02:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.853 11:02:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.853 11:02:28 rpc -- scripts/common.sh@368 -- # return 0 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:22.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.853 --rc genhtml_branch_coverage=1 00:03:22.853 --rc genhtml_function_coverage=1 00:03:22.853 --rc genhtml_legend=1 00:03:22.853 --rc geninfo_all_blocks=1 00:03:22.853 --rc geninfo_unexecuted_blocks=1 00:03:22.853 00:03:22.853 ' 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:22.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.853 --rc genhtml_branch_coverage=1 00:03:22.853 --rc genhtml_function_coverage=1 00:03:22.853 --rc genhtml_legend=1 00:03:22.853 --rc geninfo_all_blocks=1 00:03:22.853 --rc geninfo_unexecuted_blocks=1 00:03:22.853 00:03:22.853 ' 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:22.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:22.853 --rc genhtml_branch_coverage=1 00:03:22.853 --rc genhtml_function_coverage=1 00:03:22.853 --rc genhtml_legend=1 00:03:22.853 --rc geninfo_all_blocks=1 00:03:22.853 --rc geninfo_unexecuted_blocks=1 00:03:22.853 00:03:22.853 ' 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:22.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.853 --rc genhtml_branch_coverage=1 00:03:22.853 --rc genhtml_function_coverage=1 00:03:22.853 --rc genhtml_legend=1 00:03:22.853 --rc geninfo_all_blocks=1 00:03:22.853 --rc geninfo_unexecuted_blocks=1 00:03:22.853 00:03:22.853 ' 00:03:22.853 11:02:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3167299 00:03:22.853 11:02:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:22.853 11:02:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3167299 00:03:22.853 11:02:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 3167299 ']' 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:22.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:22.853 11:02:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.115 [2024-12-06 11:02:29.056291] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:23.115 [2024-12-06 11:02:29.056367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167299 ] 00:03:23.115 [2024-12-06 11:02:29.140066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.115 [2024-12-06 11:02:29.181324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:23.115 [2024-12-06 11:02:29.181360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3167299' to capture a snapshot of events at runtime. 00:03:23.115 [2024-12-06 11:02:29.181368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:23.115 [2024-12-06 11:02:29.181374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:23.116 [2024-12-06 11:02:29.181380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3167299 for offline analysis/debug. 
00:03:23.116 [2024-12-06 11:02:29.182000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:24.058 11:02:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:24.058 11:02:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:24.058 11:02:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:24.058 11:02:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:24.058 11:02:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:24.058 11:02:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:24.058 11:02:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.058 11:02:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:24.058 11:02:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.058 ************************************ 00:03:24.058 START TEST rpc_integrity 00:03:24.058 ************************************ 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:24.058 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.058 11:02:29 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:24.058 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:24.058 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:24.058 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.058 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:24.058 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.058 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.059 11:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.059 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:24.059 { 00:03:24.059 "name": "Malloc0", 00:03:24.059 "aliases": [ 00:03:24.059 "c7411a7d-34b5-410f-b2b9-6a6271b5fd47" 00:03:24.059 ], 00:03:24.059 "product_name": "Malloc disk", 00:03:24.059 "block_size": 512, 00:03:24.059 "num_blocks": 16384, 00:03:24.059 "uuid": "c7411a7d-34b5-410f-b2b9-6a6271b5fd47", 00:03:24.059 "assigned_rate_limits": { 00:03:24.059 "rw_ios_per_sec": 0, 00:03:24.059 "rw_mbytes_per_sec": 0, 00:03:24.059 "r_mbytes_per_sec": 0, 00:03:24.059 "w_mbytes_per_sec": 0 00:03:24.059 }, 00:03:24.059 "claimed": false, 00:03:24.059 "zoned": false, 00:03:24.059 "supported_io_types": { 00:03:24.059 "read": true, 00:03:24.059 "write": true, 00:03:24.059 "unmap": true, 00:03:24.059 "flush": true, 00:03:24.059 "reset": true, 00:03:24.059 "nvme_admin": false, 00:03:24.059 "nvme_io": false, 00:03:24.059 "nvme_io_md": false, 00:03:24.059 "write_zeroes": true, 00:03:24.059 "zcopy": true, 00:03:24.059 "get_zone_info": false, 00:03:24.059 
"zone_management": false, 00:03:24.059 "zone_append": false, 00:03:24.059 "compare": false, 00:03:24.059 "compare_and_write": false, 00:03:24.059 "abort": true, 00:03:24.059 "seek_hole": false, 00:03:24.059 "seek_data": false, 00:03:24.059 "copy": true, 00:03:24.059 "nvme_iov_md": false 00:03:24.059 }, 00:03:24.059 "memory_domains": [ 00:03:24.059 { 00:03:24.059 "dma_device_id": "system", 00:03:24.059 "dma_device_type": 1 00:03:24.059 }, 00:03:24.059 { 00:03:24.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.059 "dma_device_type": 2 00:03:24.059 } 00:03:24.059 ], 00:03:24.059 "driver_specific": {} 00:03:24.059 } 00:03:24.059 ]' 00:03:24.059 11:02:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.059 [2024-12-06 11:02:30.012647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:24.059 [2024-12-06 11:02:30.012679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:24.059 [2024-12-06 11:02:30.012693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20b8140 00:03:24.059 [2024-12-06 11:02:30.012700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:24.059 [2024-12-06 11:02:30.014078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:24.059 [2024-12-06 11:02:30.014099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:24.059 Passthru0 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:24.059 { 00:03:24.059 "name": "Malloc0", 00:03:24.059 "aliases": [ 00:03:24.059 "c7411a7d-34b5-410f-b2b9-6a6271b5fd47" 00:03:24.059 ], 00:03:24.059 "product_name": "Malloc disk", 00:03:24.059 "block_size": 512, 00:03:24.059 "num_blocks": 16384, 00:03:24.059 "uuid": "c7411a7d-34b5-410f-b2b9-6a6271b5fd47", 00:03:24.059 "assigned_rate_limits": { 00:03:24.059 "rw_ios_per_sec": 0, 00:03:24.059 "rw_mbytes_per_sec": 0, 00:03:24.059 "r_mbytes_per_sec": 0, 00:03:24.059 "w_mbytes_per_sec": 0 00:03:24.059 }, 00:03:24.059 "claimed": true, 00:03:24.059 "claim_type": "exclusive_write", 00:03:24.059 "zoned": false, 00:03:24.059 "supported_io_types": { 00:03:24.059 "read": true, 00:03:24.059 "write": true, 00:03:24.059 "unmap": true, 00:03:24.059 "flush": true, 00:03:24.059 "reset": true, 00:03:24.059 "nvme_admin": false, 00:03:24.059 "nvme_io": false, 00:03:24.059 "nvme_io_md": false, 00:03:24.059 "write_zeroes": true, 00:03:24.059 "zcopy": true, 00:03:24.059 "get_zone_info": false, 00:03:24.059 "zone_management": false, 00:03:24.059 "zone_append": false, 00:03:24.059 "compare": false, 00:03:24.059 "compare_and_write": false, 00:03:24.059 "abort": true, 00:03:24.059 "seek_hole": false, 00:03:24.059 "seek_data": false, 00:03:24.059 "copy": true, 00:03:24.059 "nvme_iov_md": false 00:03:24.059 }, 00:03:24.059 "memory_domains": [ 00:03:24.059 { 00:03:24.059 "dma_device_id": "system", 00:03:24.059 "dma_device_type": 1 00:03:24.059 }, 00:03:24.059 { 00:03:24.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.059 "dma_device_type": 2 00:03:24.059 } 00:03:24.059 ], 00:03:24.059 "driver_specific": {} 00:03:24.059 }, 00:03:24.059 { 
00:03:24.059 "name": "Passthru0", 00:03:24.059 "aliases": [ 00:03:24.059 "22e760f1-e2f7-5c68-af63-ea3311244067" 00:03:24.059 ], 00:03:24.059 "product_name": "passthru", 00:03:24.059 "block_size": 512, 00:03:24.059 "num_blocks": 16384, 00:03:24.059 "uuid": "22e760f1-e2f7-5c68-af63-ea3311244067", 00:03:24.059 "assigned_rate_limits": { 00:03:24.059 "rw_ios_per_sec": 0, 00:03:24.059 "rw_mbytes_per_sec": 0, 00:03:24.059 "r_mbytes_per_sec": 0, 00:03:24.059 "w_mbytes_per_sec": 0 00:03:24.059 }, 00:03:24.059 "claimed": false, 00:03:24.059 "zoned": false, 00:03:24.059 "supported_io_types": { 00:03:24.059 "read": true, 00:03:24.059 "write": true, 00:03:24.059 "unmap": true, 00:03:24.059 "flush": true, 00:03:24.059 "reset": true, 00:03:24.059 "nvme_admin": false, 00:03:24.059 "nvme_io": false, 00:03:24.059 "nvme_io_md": false, 00:03:24.059 "write_zeroes": true, 00:03:24.059 "zcopy": true, 00:03:24.059 "get_zone_info": false, 00:03:24.059 "zone_management": false, 00:03:24.059 "zone_append": false, 00:03:24.059 "compare": false, 00:03:24.059 "compare_and_write": false, 00:03:24.059 "abort": true, 00:03:24.059 "seek_hole": false, 00:03:24.059 "seek_data": false, 00:03:24.059 "copy": true, 00:03:24.059 "nvme_iov_md": false 00:03:24.059 }, 00:03:24.059 "memory_domains": [ 00:03:24.059 { 00:03:24.059 "dma_device_id": "system", 00:03:24.059 "dma_device_type": 1 00:03:24.059 }, 00:03:24.059 { 00:03:24.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.059 "dma_device_type": 2 00:03:24.059 } 00:03:24.059 ], 00:03:24.059 "driver_specific": { 00:03:24.059 "passthru": { 00:03:24.059 "name": "Passthru0", 00:03:24.059 "base_bdev_name": "Malloc0" 00:03:24.059 } 00:03:24.059 } 00:03:24.059 } 00:03:24.059 ]' 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:24.059 11:02:30 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:24.059 11:02:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:24.059 00:03:24.059 real 0m0.266s 00:03:24.059 user 0m0.165s 00:03:24.059 sys 0m0.030s 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:24.059 11:02:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.059 ************************************ 00:03:24.059 END TEST rpc_integrity 00:03:24.059 ************************************ 00:03:24.059 11:02:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:24.059 11:02:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.059 11:02:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:24.059 11:02:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 ************************************ 00:03:24.320 START TEST rpc_plugins 
00:03:24.320 ************************************ 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:24.320 { 00:03:24.320 "name": "Malloc1", 00:03:24.320 "aliases": [ 00:03:24.320 "fa721c90-62e1-497b-bfc8-9e3b87bc6ce0" 00:03:24.320 ], 00:03:24.320 "product_name": "Malloc disk", 00:03:24.320 "block_size": 4096, 00:03:24.320 "num_blocks": 256, 00:03:24.320 "uuid": "fa721c90-62e1-497b-bfc8-9e3b87bc6ce0", 00:03:24.320 "assigned_rate_limits": { 00:03:24.320 "rw_ios_per_sec": 0, 00:03:24.320 "rw_mbytes_per_sec": 0, 00:03:24.320 "r_mbytes_per_sec": 0, 00:03:24.320 "w_mbytes_per_sec": 0 00:03:24.320 }, 00:03:24.320 "claimed": false, 00:03:24.320 "zoned": false, 00:03:24.320 "supported_io_types": { 00:03:24.320 "read": true, 00:03:24.320 "write": true, 00:03:24.320 "unmap": true, 00:03:24.320 "flush": true, 00:03:24.320 "reset": true, 00:03:24.320 "nvme_admin": false, 00:03:24.320 "nvme_io": false, 00:03:24.320 "nvme_io_md": false, 00:03:24.320 "write_zeroes": true, 00:03:24.320 "zcopy": true, 00:03:24.320 "get_zone_info": false, 00:03:24.320 "zone_management": false, 00:03:24.320 
"zone_append": false, 00:03:24.320 "compare": false, 00:03:24.320 "compare_and_write": false, 00:03:24.320 "abort": true, 00:03:24.320 "seek_hole": false, 00:03:24.320 "seek_data": false, 00:03:24.320 "copy": true, 00:03:24.320 "nvme_iov_md": false 00:03:24.320 }, 00:03:24.320 "memory_domains": [ 00:03:24.320 { 00:03:24.320 "dma_device_id": "system", 00:03:24.320 "dma_device_type": 1 00:03:24.320 }, 00:03:24.320 { 00:03:24.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.320 "dma_device_type": 2 00:03:24.320 } 00:03:24.320 ], 00:03:24.320 "driver_specific": {} 00:03:24.320 } 00:03:24.320 ]' 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:24.320 11:02:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:24.320 00:03:24.320 real 0m0.146s 00:03:24.320 user 0m0.094s 00:03:24.320 sys 0m0.016s 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:24.320 11:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 ************************************ 
00:03:24.320 END TEST rpc_plugins 00:03:24.320 ************************************ 00:03:24.320 11:02:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:24.320 11:02:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.320 11:02:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:24.320 11:02:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 ************************************ 00:03:24.320 START TEST rpc_trace_cmd_test 00:03:24.320 ************************************ 00:03:24.320 11:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:24.320 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:24.320 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:24.320 11:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.320 11:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:24.320 11:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.320 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:24.320 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3167299", 00:03:24.320 "tpoint_group_mask": "0x8", 00:03:24.320 "iscsi_conn": { 00:03:24.320 "mask": "0x2", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "scsi": { 00:03:24.320 "mask": "0x4", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "bdev": { 00:03:24.320 "mask": "0x8", 00:03:24.320 "tpoint_mask": "0xffffffffffffffff" 00:03:24.320 }, 00:03:24.320 "nvmf_rdma": { 00:03:24.320 "mask": "0x10", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "nvmf_tcp": { 00:03:24.320 "mask": "0x20", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "ftl": { 00:03:24.320 "mask": "0x40", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "blobfs": { 00:03:24.320 "mask": "0x80", 00:03:24.320 
"tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "dsa": { 00:03:24.320 "mask": "0x200", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "thread": { 00:03:24.320 "mask": "0x400", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "nvme_pcie": { 00:03:24.320 "mask": "0x800", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "iaa": { 00:03:24.320 "mask": "0x1000", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "nvme_tcp": { 00:03:24.320 "mask": "0x2000", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "bdev_nvme": { 00:03:24.320 "mask": "0x4000", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "sock": { 00:03:24.320 "mask": "0x8000", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "blob": { 00:03:24.320 "mask": "0x10000", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.320 "bdev_raid": { 00:03:24.320 "mask": "0x20000", 00:03:24.320 "tpoint_mask": "0x0" 00:03:24.320 }, 00:03:24.321 "scheduler": { 00:03:24.321 "mask": "0x40000", 00:03:24.321 "tpoint_mask": "0x0" 00:03:24.321 } 00:03:24.321 }' 00:03:24.321 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:24.581 00:03:24.581 real 0m0.198s 00:03:24.581 user 0m0.166s 00:03:24.581 sys 0m0.023s 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:24.581 11:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:24.581 ************************************ 00:03:24.581 END TEST rpc_trace_cmd_test 00:03:24.581 ************************************ 00:03:24.581 11:02:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:24.581 11:02:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:24.581 11:02:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:24.581 11:02:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.581 11:02:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:24.581 11:02:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.581 ************************************ 00:03:24.581 START TEST rpc_daemon_integrity 00:03:24.581 ************************************ 00:03:24.581 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:24.581 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:24.581 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.581 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.581 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.581 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:24.581 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:24.841 { 00:03:24.841 "name": "Malloc2", 00:03:24.841 "aliases": [ 00:03:24.841 "0b6b10dd-9589-47fc-9b68-9f1b44960f0b" 00:03:24.841 ], 00:03:24.841 "product_name": "Malloc disk", 00:03:24.841 "block_size": 512, 00:03:24.841 "num_blocks": 16384, 00:03:24.841 "uuid": "0b6b10dd-9589-47fc-9b68-9f1b44960f0b", 00:03:24.841 "assigned_rate_limits": { 00:03:24.841 "rw_ios_per_sec": 0, 00:03:24.841 "rw_mbytes_per_sec": 0, 00:03:24.841 "r_mbytes_per_sec": 0, 00:03:24.841 "w_mbytes_per_sec": 0 00:03:24.841 }, 00:03:24.841 "claimed": false, 00:03:24.841 "zoned": false, 00:03:24.841 "supported_io_types": { 00:03:24.841 "read": true, 00:03:24.841 "write": true, 00:03:24.841 "unmap": true, 00:03:24.841 "flush": true, 00:03:24.841 "reset": true, 00:03:24.841 "nvme_admin": false, 00:03:24.841 "nvme_io": false, 00:03:24.841 "nvme_io_md": false, 00:03:24.841 "write_zeroes": true, 00:03:24.841 "zcopy": true, 00:03:24.841 "get_zone_info": false, 00:03:24.841 "zone_management": false, 00:03:24.841 "zone_append": false, 00:03:24.841 "compare": false, 00:03:24.841 "compare_and_write": false, 00:03:24.841 "abort": true, 00:03:24.841 "seek_hole": false, 00:03:24.841 "seek_data": false, 00:03:24.841 "copy": true, 00:03:24.841 "nvme_iov_md": false 00:03:24.841 }, 00:03:24.841 "memory_domains": [ 00:03:24.841 { 
00:03:24.841 "dma_device_id": "system", 00:03:24.841 "dma_device_type": 1 00:03:24.841 }, 00:03:24.841 { 00:03:24.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.841 "dma_device_type": 2 00:03:24.841 } 00:03:24.841 ], 00:03:24.841 "driver_specific": {} 00:03:24.841 } 00:03:24.841 ]' 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.841 [2024-12-06 11:02:30.854928] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:24.841 [2024-12-06 11:02:30.854958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:24.841 [2024-12-06 11:02:30.854971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20055d0 00:03:24.841 [2024-12-06 11:02:30.854978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:24.841 [2024-12-06 11:02:30.856255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:24.841 [2024-12-06 11:02:30.856275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:24.841 Passthru0 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:24.841 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:24.841 { 00:03:24.841 "name": "Malloc2", 00:03:24.841 "aliases": [ 00:03:24.841 "0b6b10dd-9589-47fc-9b68-9f1b44960f0b" 00:03:24.841 ], 00:03:24.841 "product_name": "Malloc disk", 00:03:24.841 "block_size": 512, 00:03:24.841 "num_blocks": 16384, 00:03:24.841 "uuid": "0b6b10dd-9589-47fc-9b68-9f1b44960f0b", 00:03:24.841 "assigned_rate_limits": { 00:03:24.841 "rw_ios_per_sec": 0, 00:03:24.841 "rw_mbytes_per_sec": 0, 00:03:24.841 "r_mbytes_per_sec": 0, 00:03:24.841 "w_mbytes_per_sec": 0 00:03:24.841 }, 00:03:24.841 "claimed": true, 00:03:24.841 "claim_type": "exclusive_write", 00:03:24.841 "zoned": false, 00:03:24.841 "supported_io_types": { 00:03:24.841 "read": true, 00:03:24.841 "write": true, 00:03:24.841 "unmap": true, 00:03:24.841 "flush": true, 00:03:24.841 "reset": true, 00:03:24.841 "nvme_admin": false, 00:03:24.841 "nvme_io": false, 00:03:24.841 "nvme_io_md": false, 00:03:24.841 "write_zeroes": true, 00:03:24.841 "zcopy": true, 00:03:24.841 "get_zone_info": false, 00:03:24.841 "zone_management": false, 00:03:24.841 "zone_append": false, 00:03:24.842 "compare": false, 00:03:24.842 "compare_and_write": false, 00:03:24.842 "abort": true, 00:03:24.842 "seek_hole": false, 00:03:24.842 "seek_data": false, 00:03:24.842 "copy": true, 00:03:24.842 "nvme_iov_md": false 00:03:24.842 }, 00:03:24.842 "memory_domains": [ 00:03:24.842 { 00:03:24.842 "dma_device_id": "system", 00:03:24.842 "dma_device_type": 1 00:03:24.842 }, 00:03:24.842 { 00:03:24.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.842 "dma_device_type": 2 00:03:24.842 } 00:03:24.842 ], 00:03:24.842 "driver_specific": {} 00:03:24.842 }, 00:03:24.842 { 00:03:24.842 "name": "Passthru0", 00:03:24.842 "aliases": [ 00:03:24.842 "5e7b8584-ebef-5738-b949-0fcee2df9592" 00:03:24.842 ], 00:03:24.842 "product_name": "passthru", 00:03:24.842 "block_size": 512, 00:03:24.842 "num_blocks": 16384, 00:03:24.842 "uuid": 
"5e7b8584-ebef-5738-b949-0fcee2df9592", 00:03:24.842 "assigned_rate_limits": { 00:03:24.842 "rw_ios_per_sec": 0, 00:03:24.842 "rw_mbytes_per_sec": 0, 00:03:24.842 "r_mbytes_per_sec": 0, 00:03:24.842 "w_mbytes_per_sec": 0 00:03:24.842 }, 00:03:24.842 "claimed": false, 00:03:24.842 "zoned": false, 00:03:24.842 "supported_io_types": { 00:03:24.842 "read": true, 00:03:24.842 "write": true, 00:03:24.842 "unmap": true, 00:03:24.842 "flush": true, 00:03:24.842 "reset": true, 00:03:24.842 "nvme_admin": false, 00:03:24.842 "nvme_io": false, 00:03:24.842 "nvme_io_md": false, 00:03:24.842 "write_zeroes": true, 00:03:24.842 "zcopy": true, 00:03:24.842 "get_zone_info": false, 00:03:24.842 "zone_management": false, 00:03:24.842 "zone_append": false, 00:03:24.842 "compare": false, 00:03:24.842 "compare_and_write": false, 00:03:24.842 "abort": true, 00:03:24.842 "seek_hole": false, 00:03:24.842 "seek_data": false, 00:03:24.842 "copy": true, 00:03:24.842 "nvme_iov_md": false 00:03:24.842 }, 00:03:24.842 "memory_domains": [ 00:03:24.842 { 00:03:24.842 "dma_device_id": "system", 00:03:24.842 "dma_device_type": 1 00:03:24.842 }, 00:03:24.842 { 00:03:24.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.842 "dma_device_type": 2 00:03:24.842 } 00:03:24.842 ], 00:03:24.842 "driver_specific": { 00:03:24.842 "passthru": { 00:03:24.842 "name": "Passthru0", 00:03:24.842 "base_bdev_name": "Malloc2" 00:03:24.842 } 00:03:24.842 } 00:03:24.842 } 00:03:24.842 ]' 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:24.842 00:03:24.842 real 0m0.258s 00:03:24.842 user 0m0.149s 00:03:24.842 sys 0m0.047s 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:24.842 11:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.842 ************************************ 00:03:24.842 END TEST rpc_daemon_integrity 00:03:24.842 ************************************ 00:03:25.102 11:02:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:25.102 11:02:31 rpc -- rpc/rpc.sh@84 -- # killprocess 3167299 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@954 -- # '[' -z 3167299 ']' 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@958 -- # kill -0 3167299 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@959 -- # uname 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:25.102 11:02:31 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167299 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167299' 00:03:25.102 killing process with pid 3167299 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@973 -- # kill 3167299 00:03:25.102 11:02:31 rpc -- common/autotest_common.sh@978 -- # wait 3167299 00:03:25.363 00:03:25.363 real 0m2.504s 00:03:25.363 user 0m3.175s 00:03:25.363 sys 0m0.737s 00:03:25.363 11:02:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.363 11:02:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.363 ************************************ 00:03:25.363 END TEST rpc 00:03:25.363 ************************************ 00:03:25.363 11:02:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:25.363 11:02:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.363 11:02:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.363 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:03:25.363 ************************************ 00:03:25.363 START TEST skip_rpc 00:03:25.363 ************************************ 00:03:25.363 11:02:31 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:25.363 * Looking for test storage... 
00:03:25.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:25.363 11:02:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:25.363 11:02:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:25.363 11:02:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:25.623 11:02:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.623 11:02:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:25.623 11:02:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.623 11:02:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:25.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.623 --rc genhtml_branch_coverage=1 00:03:25.623 --rc genhtml_function_coverage=1 00:03:25.623 --rc genhtml_legend=1 00:03:25.623 --rc geninfo_all_blocks=1 00:03:25.623 --rc geninfo_unexecuted_blocks=1 00:03:25.623 00:03:25.623 ' 00:03:25.623 11:02:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:25.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.623 --rc genhtml_branch_coverage=1 00:03:25.623 --rc genhtml_function_coverage=1 00:03:25.623 --rc genhtml_legend=1 00:03:25.623 --rc geninfo_all_blocks=1 00:03:25.623 --rc geninfo_unexecuted_blocks=1 00:03:25.623 00:03:25.623 ' 00:03:25.623 11:02:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:03:25.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.623 --rc genhtml_branch_coverage=1 00:03:25.623 --rc genhtml_function_coverage=1 00:03:25.623 --rc genhtml_legend=1 00:03:25.623 --rc geninfo_all_blocks=1 00:03:25.623 --rc geninfo_unexecuted_blocks=1 00:03:25.623 00:03:25.624 ' 00:03:25.624 11:02:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:25.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.624 --rc genhtml_branch_coverage=1 00:03:25.624 --rc genhtml_function_coverage=1 00:03:25.624 --rc genhtml_legend=1 00:03:25.624 --rc geninfo_all_blocks=1 00:03:25.624 --rc geninfo_unexecuted_blocks=1 00:03:25.624 00:03:25.624 ' 00:03:25.624 11:02:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:25.624 11:02:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:25.624 11:02:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:25.624 11:02:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.624 11:02:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.624 11:02:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.624 ************************************ 00:03:25.624 START TEST skip_rpc 00:03:25.624 ************************************ 00:03:25.624 11:02:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:25.624 11:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3167953 00:03:25.624 11:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:25.624 11:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:25.624 11:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:25.624 [2024-12-06 11:02:31.667486] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:03:25.624 [2024-12-06 11:02:31.667549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167953 ] 00:03:25.624 [2024-12-06 11:02:31.750225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.884 [2024-12-06 11:02:31.791599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:31.172 11:02:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3167953 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3167953 ']' 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3167953 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167953 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:31.172 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167953' 00:03:31.172 killing process with pid 3167953 00:03:31.173 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3167953 00:03:31.173 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3167953 00:03:31.173 00:03:31.173 real 0m5.285s 00:03:31.173 user 0m5.087s 00:03:31.173 sys 0m0.247s 00:03:31.173 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.173 11:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.173 ************************************ 00:03:31.173 END TEST skip_rpc 00:03:31.173 ************************************ 00:03:31.173 11:02:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:31.173 11:02:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.173 11:02:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.173 11:02:36 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.173 ************************************ 00:03:31.173 START TEST skip_rpc_with_json 00:03:31.173 ************************************ 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3169004 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3169004 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3169004 ']' 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:31.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:31.173 11:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.173 [2024-12-06 11:02:37.020198] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:31.173 [2024-12-06 11:02:37.020257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169004 ] 00:03:31.173 [2024-12-06 11:02:37.102456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.173 [2024-12-06 11:02:37.141883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.745 [2024-12-06 11:02:37.810929] nvmf_rpc.c:2872:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:31.745 request: 00:03:31.745 { 00:03:31.745 "trtype": "tcp", 00:03:31.745 "method": "nvmf_get_transports", 00:03:31.745 "req_id": 1 00:03:31.745 } 00:03:31.745 Got JSON-RPC error response 00:03:31.745 response: 00:03:31.745 { 00:03:31.745 "code": -19, 00:03:31.745 "message": "No such device" 00:03:31.745 } 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.745 [2024-12-06 11:02:37.823058] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:31.745 11:02:37 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.745 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.006 11:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:32.006 11:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:32.006 { 00:03:32.006 "subsystems": [ 00:03:32.006 { 00:03:32.006 "subsystem": "fsdev", 00:03:32.006 "config": [ 00:03:32.006 { 00:03:32.006 "method": "fsdev_set_opts", 00:03:32.006 "params": { 00:03:32.006 "fsdev_io_pool_size": 65535, 00:03:32.006 "fsdev_io_cache_size": 256 00:03:32.006 } 00:03:32.006 } 00:03:32.006 ] 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "subsystem": "vfio_user_target", 00:03:32.006 "config": null 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "subsystem": "keyring", 00:03:32.006 "config": [] 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "subsystem": "iobuf", 00:03:32.006 "config": [ 00:03:32.006 { 00:03:32.006 "method": "iobuf_set_options", 00:03:32.006 "params": { 00:03:32.006 "small_pool_count": 8192, 00:03:32.006 "large_pool_count": 1024, 00:03:32.006 "small_bufsize": 8192, 00:03:32.006 "large_bufsize": 135168, 00:03:32.006 "enable_numa": false 00:03:32.006 } 00:03:32.006 } 00:03:32.006 ] 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "subsystem": "sock", 00:03:32.006 "config": [ 00:03:32.006 { 00:03:32.006 "method": "sock_set_default_impl", 00:03:32.006 "params": { 00:03:32.006 "impl_name": "posix" 00:03:32.006 } 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "method": "sock_impl_set_options", 00:03:32.006 "params": { 00:03:32.006 "impl_name": "ssl", 00:03:32.006 "recv_buf_size": 4096, 00:03:32.006 "send_buf_size": 4096, 
00:03:32.006 "enable_recv_pipe": true, 00:03:32.006 "enable_quickack": false, 00:03:32.006 "enable_placement_id": 0, 00:03:32.006 "enable_zerocopy_send_server": true, 00:03:32.006 "enable_zerocopy_send_client": false, 00:03:32.006 "zerocopy_threshold": 0, 00:03:32.006 "tls_version": 0, 00:03:32.006 "enable_ktls": false 00:03:32.006 } 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "method": "sock_impl_set_options", 00:03:32.006 "params": { 00:03:32.006 "impl_name": "posix", 00:03:32.006 "recv_buf_size": 2097152, 00:03:32.006 "send_buf_size": 2097152, 00:03:32.006 "enable_recv_pipe": true, 00:03:32.006 "enable_quickack": false, 00:03:32.006 "enable_placement_id": 0, 00:03:32.006 "enable_zerocopy_send_server": true, 00:03:32.006 "enable_zerocopy_send_client": false, 00:03:32.006 "zerocopy_threshold": 0, 00:03:32.006 "tls_version": 0, 00:03:32.006 "enable_ktls": false 00:03:32.006 } 00:03:32.006 } 00:03:32.006 ] 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "subsystem": "vmd", 00:03:32.006 "config": [] 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "subsystem": "accel", 00:03:32.006 "config": [ 00:03:32.006 { 00:03:32.006 "method": "accel_set_options", 00:03:32.006 "params": { 00:03:32.006 "small_cache_size": 128, 00:03:32.006 "large_cache_size": 16, 00:03:32.006 "task_count": 2048, 00:03:32.006 "sequence_count": 2048, 00:03:32.006 "buf_count": 2048 00:03:32.006 } 00:03:32.006 } 00:03:32.006 ] 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "subsystem": "bdev", 00:03:32.006 "config": [ 00:03:32.006 { 00:03:32.006 "method": "bdev_set_options", 00:03:32.006 "params": { 00:03:32.006 "bdev_io_pool_size": 65535, 00:03:32.006 "bdev_io_cache_size": 256, 00:03:32.006 "bdev_auto_examine": true, 00:03:32.006 "iobuf_small_cache_size": 128, 00:03:32.006 "iobuf_large_cache_size": 16 00:03:32.006 } 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "method": "bdev_raid_set_options", 00:03:32.006 "params": { 00:03:32.006 "process_window_size_kb": 1024, 00:03:32.006 "process_max_bandwidth_mb_sec": 0 
00:03:32.006 } 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "method": "bdev_iscsi_set_options", 00:03:32.006 "params": { 00:03:32.006 "timeout_sec": 30 00:03:32.006 } 00:03:32.006 }, 00:03:32.006 { 00:03:32.006 "method": "bdev_nvme_set_options", 00:03:32.006 "params": { 00:03:32.006 "action_on_timeout": "none", 00:03:32.006 "timeout_us": 0, 00:03:32.006 "timeout_admin_us": 0, 00:03:32.006 "keep_alive_timeout_ms": 10000, 00:03:32.006 "arbitration_burst": 0, 00:03:32.006 "low_priority_weight": 0, 00:03:32.006 "medium_priority_weight": 0, 00:03:32.006 "high_priority_weight": 0, 00:03:32.006 "nvme_adminq_poll_period_us": 10000, 00:03:32.006 "nvme_ioq_poll_period_us": 0, 00:03:32.006 "io_queue_requests": 0, 00:03:32.006 "delay_cmd_submit": true, 00:03:32.006 "transport_retry_count": 4, 00:03:32.006 "bdev_retry_count": 3, 00:03:32.006 "transport_ack_timeout": 0, 00:03:32.006 "ctrlr_loss_timeout_sec": 0, 00:03:32.007 "reconnect_delay_sec": 0, 00:03:32.007 "fast_io_fail_timeout_sec": 0, 00:03:32.007 "disable_auto_failback": false, 00:03:32.007 "generate_uuids": false, 00:03:32.007 "transport_tos": 0, 00:03:32.007 "nvme_error_stat": false, 00:03:32.007 "rdma_srq_size": 0, 00:03:32.007 "io_path_stat": false, 00:03:32.007 "allow_accel_sequence": false, 00:03:32.007 "rdma_max_cq_size": 0, 00:03:32.007 "rdma_cm_event_timeout_ms": 0, 00:03:32.007 "dhchap_digests": [ 00:03:32.007 "sha256", 00:03:32.007 "sha384", 00:03:32.007 "sha512" 00:03:32.007 ], 00:03:32.007 "dhchap_dhgroups": [ 00:03:32.007 "null", 00:03:32.007 "ffdhe2048", 00:03:32.007 "ffdhe3072", 00:03:32.007 "ffdhe4096", 00:03:32.007 "ffdhe6144", 00:03:32.007 "ffdhe8192" 00:03:32.007 ] 00:03:32.007 } 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "method": "bdev_nvme_set_hotplug", 00:03:32.007 "params": { 00:03:32.007 "period_us": 100000, 00:03:32.007 "enable": false 00:03:32.007 } 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "method": "bdev_wait_for_examine" 00:03:32.007 } 00:03:32.007 ] 00:03:32.007 }, 00:03:32.007 { 
00:03:32.007 "subsystem": "scsi", 00:03:32.007 "config": null 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "subsystem": "scheduler", 00:03:32.007 "config": [ 00:03:32.007 { 00:03:32.007 "method": "framework_set_scheduler", 00:03:32.007 "params": { 00:03:32.007 "name": "static" 00:03:32.007 } 00:03:32.007 } 00:03:32.007 ] 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "subsystem": "vhost_scsi", 00:03:32.007 "config": [] 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "subsystem": "vhost_blk", 00:03:32.007 "config": [] 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "subsystem": "ublk", 00:03:32.007 "config": [] 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "subsystem": "nbd", 00:03:32.007 "config": [] 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "subsystem": "nvmf", 00:03:32.007 "config": [ 00:03:32.007 { 00:03:32.007 "method": "nvmf_set_config", 00:03:32.007 "params": { 00:03:32.007 "discovery_filter": "match_any", 00:03:32.007 "admin_cmd_passthru": { 00:03:32.007 "identify_ctrlr": false 00:03:32.007 }, 00:03:32.007 "dhchap_digests": [ 00:03:32.007 "sha256", 00:03:32.007 "sha384", 00:03:32.007 "sha512" 00:03:32.007 ], 00:03:32.007 "dhchap_dhgroups": [ 00:03:32.007 "null", 00:03:32.007 "ffdhe2048", 00:03:32.007 "ffdhe3072", 00:03:32.007 "ffdhe4096", 00:03:32.007 "ffdhe6144", 00:03:32.007 "ffdhe8192" 00:03:32.007 ] 00:03:32.007 } 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "method": "nvmf_set_max_subsystems", 00:03:32.007 "params": { 00:03:32.007 "max_subsystems": 1024 00:03:32.007 } 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "method": "nvmf_set_crdt", 00:03:32.007 "params": { 00:03:32.007 "crdt1": 0, 00:03:32.007 "crdt2": 0, 00:03:32.007 "crdt3": 0 00:03:32.007 } 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "method": "nvmf_create_transport", 00:03:32.007 "params": { 00:03:32.007 "trtype": "TCP", 00:03:32.007 "max_queue_depth": 128, 00:03:32.007 "max_io_qpairs_per_ctrlr": 127, 00:03:32.007 "in_capsule_data_size": 4096, 00:03:32.007 "max_io_size": 131072, 00:03:32.007 
"io_unit_size": 131072, 00:03:32.007 "max_aq_depth": 128, 00:03:32.007 "num_shared_buffers": 511, 00:03:32.007 "buf_cache_size": 4294967295, 00:03:32.007 "dif_insert_or_strip": false, 00:03:32.007 "zcopy": false, 00:03:32.007 "c2h_success": true, 00:03:32.007 "sock_priority": 0, 00:03:32.007 "abort_timeout_sec": 1, 00:03:32.007 "ack_timeout": 0, 00:03:32.007 "data_wr_pool_size": 0 00:03:32.007 } 00:03:32.007 } 00:03:32.007 ] 00:03:32.007 }, 00:03:32.007 { 00:03:32.007 "subsystem": "iscsi", 00:03:32.007 "config": [ 00:03:32.007 { 00:03:32.007 "method": "iscsi_set_options", 00:03:32.007 "params": { 00:03:32.007 "node_base": "iqn.2016-06.io.spdk", 00:03:32.007 "max_sessions": 128, 00:03:32.007 "max_connections_per_session": 2, 00:03:32.007 "max_queue_depth": 64, 00:03:32.007 "default_time2wait": 2, 00:03:32.007 "default_time2retain": 20, 00:03:32.007 "first_burst_length": 8192, 00:03:32.007 "immediate_data": true, 00:03:32.007 "allow_duplicated_isid": false, 00:03:32.007 "error_recovery_level": 0, 00:03:32.007 "nop_timeout": 60, 00:03:32.007 "nop_in_interval": 30, 00:03:32.007 "disable_chap": false, 00:03:32.007 "require_chap": false, 00:03:32.007 "mutual_chap": false, 00:03:32.007 "chap_group": 0, 00:03:32.007 "max_large_datain_per_connection": 64, 00:03:32.007 "max_r2t_per_connection": 4, 00:03:32.007 "pdu_pool_size": 36864, 00:03:32.007 "immediate_data_pool_size": 16384, 00:03:32.007 "data_out_pool_size": 2048 00:03:32.007 } 00:03:32.007 } 00:03:32.007 ] 00:03:32.007 } 00:03:32.007 ] 00:03:32.007 } 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3169004 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3169004 ']' 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3169004 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169004 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169004' 00:03:32.007 killing process with pid 3169004 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3169004 00:03:32.007 11:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3169004 00:03:32.268 11:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3169332 00:03:32.268 11:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:32.268 11:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3169332 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3169332 ']' 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3169332 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169332 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169332' 00:03:37.558 killing process with pid 3169332 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3169332 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3169332 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.558 00:03:37.558 real 0m6.597s 00:03:37.558 user 0m6.488s 00:03:37.558 sys 0m0.564s 00:03:37.558 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.559 ************************************ 00:03:37.559 END TEST skip_rpc_with_json 00:03:37.559 ************************************ 00:03:37.559 11:02:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:37.559 11:02:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.559 11:02:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.559 11:02:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.559 ************************************ 00:03:37.559 START TEST skip_rpc_with_delay 00:03:37.559 ************************************ 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.559 [2024-12-06 11:02:43.693396] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:37.559 00:03:37.559 real 0m0.074s 00:03:37.559 user 0m0.053s 00:03:37.559 sys 0m0.021s 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.559 11:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:37.559 ************************************ 00:03:37.559 END TEST skip_rpc_with_delay 00:03:37.559 ************************************ 00:03:37.820 11:02:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:37.820 11:02:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:37.820 11:02:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:37.820 11:02:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.820 11:02:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.820 11:02:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.820 ************************************ 00:03:37.820 START TEST exit_on_failed_rpc_init 00:03:37.820 ************************************ 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3170434 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3170434 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3170434 ']' 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:37.820 11:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:37.820 [2024-12-06 11:02:43.847703] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:03:37.820 [2024-12-06 11:02:43.847774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170434 ] 00:03:37.820 [2024-12-06 11:02:43.930092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.820 [2024-12-06 11:02:43.971885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.507 
11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:38.507 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.771 [2024-12-06 11:02:44.709179] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:38.771 [2024-12-06 11:02:44.709232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170731 ] 00:03:38.771 [2024-12-06 11:02:44.802296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.771 [2024-12-06 11:02:44.838233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:38.771 [2024-12-06 11:02:44.838284] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:38.771 [2024-12-06 11:02:44.838294] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:38.771 [2024-12-06 11:02:44.838301] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3170434 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3170434 ']' 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3170434 00:03:38.771 11:02:44 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:38.771 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3170434 00:03:39.031 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.031 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.031 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3170434' 00:03:39.031 killing process with pid 3170434 00:03:39.031 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3170434 00:03:39.031 11:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3170434 00:03:39.031 00:03:39.031 real 0m1.360s 00:03:39.031 user 0m1.597s 00:03:39.031 sys 0m0.388s 00:03:39.031 11:02:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.031 11:02:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.031 ************************************ 00:03:39.031 END TEST exit_on_failed_rpc_init 00:03:39.031 ************************************ 00:03:39.032 11:02:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.032 00:03:39.032 real 0m13.825s 00:03:39.032 user 0m13.446s 00:03:39.032 sys 0m1.533s 00:03:39.032 11:02:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.032 11:02:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.032 ************************************ 00:03:39.032 END TEST skip_rpc 00:03:39.032 ************************************ 00:03:39.292 11:02:45 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.292 11:02:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.292 11:02:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.292 11:02:45 -- common/autotest_common.sh@10 -- # set +x 00:03:39.292 ************************************ 00:03:39.292 START TEST rpc_client 00:03:39.292 ************************************ 00:03:39.292 11:02:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.292 * Looking for test storage... 00:03:39.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:39.292 11:02:45 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:39.292 11:02:45 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:39.292 11:02:45 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:39.292 11:02:45 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.292 11:02:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:39.293 11:02:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.553 11:02:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.553 11:02:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.553 11:02:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:39.553 11:02:45 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.553 11:02:45 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:39.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.553 --rc genhtml_branch_coverage=1 00:03:39.553 --rc genhtml_function_coverage=1 00:03:39.553 --rc genhtml_legend=1 00:03:39.553 --rc geninfo_all_blocks=1 00:03:39.553 --rc geninfo_unexecuted_blocks=1 00:03:39.553 00:03:39.553 ' 00:03:39.553 11:02:45 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:39.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.553 --rc genhtml_branch_coverage=1 
00:03:39.553 --rc genhtml_function_coverage=1 00:03:39.553 --rc genhtml_legend=1 00:03:39.553 --rc geninfo_all_blocks=1 00:03:39.553 --rc geninfo_unexecuted_blocks=1 00:03:39.553 00:03:39.553 ' 00:03:39.553 11:02:45 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:39.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.553 --rc genhtml_branch_coverage=1 00:03:39.553 --rc genhtml_function_coverage=1 00:03:39.553 --rc genhtml_legend=1 00:03:39.553 --rc geninfo_all_blocks=1 00:03:39.553 --rc geninfo_unexecuted_blocks=1 00:03:39.553 00:03:39.553 ' 00:03:39.553 11:02:45 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:39.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.553 --rc genhtml_branch_coverage=1 00:03:39.553 --rc genhtml_function_coverage=1 00:03:39.553 --rc genhtml_legend=1 00:03:39.553 --rc geninfo_all_blocks=1 00:03:39.553 --rc geninfo_unexecuted_blocks=1 00:03:39.553 00:03:39.553 ' 00:03:39.553 11:02:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:39.553 OK 00:03:39.554 11:02:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:39.554 00:03:39.554 real 0m0.226s 00:03:39.554 user 0m0.134s 00:03:39.554 sys 0m0.104s 00:03:39.554 11:02:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.554 11:02:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:39.554 ************************************ 00:03:39.554 END TEST rpc_client 00:03:39.554 ************************************ 00:03:39.554 11:02:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:39.554 11:02:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.554 11:02:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.554 11:02:45 -- common/autotest_common.sh@10 
-- # set +x 00:03:39.554 ************************************ 00:03:39.554 START TEST json_config 00:03:39.554 ************************************ 00:03:39.554 11:02:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:39.554 11:02:45 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:39.554 11:02:45 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:39.554 11:02:45 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.816 11:02:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.816 11:02:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.816 11:02:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.816 11:02:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.816 11:02:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.816 11:02:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.816 11:02:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.816 11:02:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:39.816 11:02:45 json_config -- scripts/common.sh@345 -- # : 1 00:03:39.816 11:02:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.816 11:02:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.816 11:02:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:39.816 11:02:45 json_config -- scripts/common.sh@353 -- # local d=1 00:03:39.816 11:02:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.816 11:02:45 json_config -- scripts/common.sh@355 -- # echo 1 00:03:39.816 11:02:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.816 11:02:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@353 -- # local d=2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.816 11:02:45 json_config -- scripts/common.sh@355 -- # echo 2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.816 11:02:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.816 11:02:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.816 11:02:45 json_config -- scripts/common.sh@368 -- # return 0 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.816 --rc genhtml_branch_coverage=1 00:03:39.816 --rc genhtml_function_coverage=1 00:03:39.816 --rc genhtml_legend=1 00:03:39.816 --rc geninfo_all_blocks=1 00:03:39.816 --rc geninfo_unexecuted_blocks=1 00:03:39.816 00:03:39.816 ' 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.816 --rc genhtml_branch_coverage=1 00:03:39.816 --rc genhtml_function_coverage=1 00:03:39.816 --rc genhtml_legend=1 00:03:39.816 --rc geninfo_all_blocks=1 00:03:39.816 --rc geninfo_unexecuted_blocks=1 00:03:39.816 00:03:39.816 ' 00:03:39.816 11:02:45 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.816 --rc genhtml_branch_coverage=1 00:03:39.816 --rc genhtml_function_coverage=1 00:03:39.816 --rc genhtml_legend=1 00:03:39.816 --rc geninfo_all_blocks=1 00:03:39.816 --rc geninfo_unexecuted_blocks=1 00:03:39.816 00:03:39.816 ' 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.816 --rc genhtml_branch_coverage=1 00:03:39.816 --rc genhtml_function_coverage=1 00:03:39.816 --rc genhtml_legend=1 00:03:39.816 --rc geninfo_all_blocks=1 00:03:39.816 --rc geninfo_unexecuted_blocks=1 00:03:39.816 00:03:39.816 ' 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:39.816 11:02:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:39.816 11:02:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.816 11:02:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.816 11:02:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.816 11:02:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.816 11:02:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.816 11:02:45 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.816 11:02:45 json_config -- paths/export.sh@5 -- # export PATH 00:03:39.816 11:02:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@51 -- # : 0 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:39.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:39.816 11:02:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:39.816 INFO: JSON configuration test init 00:03:39.816 11:02:45 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.816 11:02:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.816 11:02:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:39.816 11:02:45 json_config -- json_config/common.sh@9 -- # local app=target 00:03:39.816 11:02:45 json_config -- json_config/common.sh@10 -- # shift 00:03:39.816 11:02:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:39.816 11:02:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:39.816 11:02:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:39.816 11:02:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.816 11:02:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.817 11:02:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3171074 00:03:39.817 11:02:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:39.817 Waiting for target to run... 
00:03:39.817 11:02:45 json_config -- json_config/common.sh@25 -- # waitforlisten 3171074 /var/tmp/spdk_tgt.sock 00:03:39.817 11:02:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 3171074 ']' 00:03:39.817 11:02:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:39.817 11:02:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:39.817 11:02:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:39.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:39.817 11:02:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:39.817 11:02:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:39.817 11:02:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.817 [2024-12-06 11:02:45.859389] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:39.817 [2024-12-06 11:02:45.859462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171074 ] 00:03:40.388 [2024-12-06 11:02:46.286239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.388 [2024-12-06 11:02:46.320870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.648 11:02:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:40.648 11:02:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:40.649 11:02:46 json_config -- json_config/common.sh@26 -- # echo '' 00:03:40.649 00:03:40.649 11:02:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:40.649 11:02:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:40.649 11:02:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.649 11:02:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.649 11:02:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:40.649 11:02:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:40.649 11:02:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.649 11:02:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.649 11:02:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:40.649 11:02:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:40.649 11:02:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:41.221 11:02:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.221 11:02:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:41.221 11:02:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:41.221 11:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@54 -- # sort 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:41.481 11:02:47 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:41.481 11:02:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:41.481 11:02:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:41.481 11:02:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.481 11:02:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:41.481 11:02:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:41.481 11:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:41.742 MallocForNvmf0 00:03:41.742 11:02:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:41.742 11:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:41.742 MallocForNvmf1 00:03:41.742 11:02:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:41.742 11:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:42.004 [2024-12-06 11:02:48.021905] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:42.004 11:02:48 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:42.004 11:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:42.265 11:02:48 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:42.265 11:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:42.265 11:02:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:42.265 11:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:42.527 11:02:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:42.527 11:02:48 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:42.527 [2024-12-06 11:02:48.660007] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:42.527 11:02:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:42.527 11:02:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:42.527 11:02:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.788 11:02:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:42.788 11:02:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:42.788 11:02:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.788 11:02:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:42.788 11:02:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:42.788 11:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:42.788 MallocBdevForConfigChangeCheck 00:03:42.788 11:02:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:42.788 11:02:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:42.788 11:02:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.050 11:02:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:43.050 11:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.312 11:02:49 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:43.312 INFO: shutting down applications... 00:03:43.312 11:02:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:43.312 11:02:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:43.312 11:02:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:43.312 11:02:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:43.573 Calling clear_iscsi_subsystem 00:03:43.573 Calling clear_nvmf_subsystem 00:03:43.573 Calling clear_nbd_subsystem 00:03:43.573 Calling clear_ublk_subsystem 00:03:43.573 Calling clear_vhost_blk_subsystem 00:03:43.573 Calling clear_vhost_scsi_subsystem 00:03:43.573 Calling clear_bdev_subsystem 00:03:43.573 11:02:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:43.573 11:02:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:43.573 11:02:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:43.573 11:02:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.573 11:02:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:43.573 11:02:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:44.146 11:02:50 json_config -- json_config/json_config.sh@352 -- # break 00:03:44.146 11:02:50 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:44.146 11:02:50 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:44.146 11:02:50 json_config -- json_config/common.sh@31 -- # local app=target 00:03:44.146 11:02:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:44.146 11:02:50 json_config -- json_config/common.sh@35 -- # [[ -n 3171074 ]] 00:03:44.146 11:02:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3171074 00:03:44.146 11:02:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:44.146 11:02:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:44.146 11:02:50 json_config -- json_config/common.sh@41 -- # kill -0 3171074 00:03:44.146 11:02:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:44.407 11:02:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:44.407 11:02:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:44.407 11:02:50 json_config -- json_config/common.sh@41 -- # kill -0 3171074 00:03:44.407 11:02:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:44.407 11:02:50 json_config -- json_config/common.sh@43 -- # break 00:03:44.407 11:02:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:44.408 11:02:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:44.408 SPDK target shutdown done 00:03:44.408 11:02:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:44.408 INFO: relaunching applications... 
00:03:44.408 11:02:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:44.408 11:02:50 json_config -- json_config/common.sh@9 -- # local app=target 00:03:44.408 11:02:50 json_config -- json_config/common.sh@10 -- # shift 00:03:44.408 11:02:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:44.408 11:02:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:44.408 11:02:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:44.408 11:02:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.408 11:02:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.408 11:02:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3172043 00:03:44.408 11:02:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:44.408 Waiting for target to run... 00:03:44.408 11:02:50 json_config -- json_config/common.sh@25 -- # waitforlisten 3172043 /var/tmp/spdk_tgt.sock 00:03:44.408 11:02:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:44.408 11:02:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 3172043 ']' 00:03:44.408 11:02:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.408 11:02:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:44.408 11:02:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:44.408 11:02:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:44.408 11:02:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.668 [2024-12-06 11:02:50.627071] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:03:44.669 [2024-12-06 11:02:50.627143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172043 ] 00:03:44.930 [2024-12-06 11:02:50.892792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.930 [2024-12-06 11:02:50.921034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.501 [2024-12-06 11:02:51.444113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:45.501 [2024-12-06 11:02:51.476494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:45.501 11:02:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.501 11:02:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:45.501 11:02:51 json_config -- json_config/common.sh@26 -- # echo '' 00:03:45.501 00:03:45.501 11:02:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:45.501 11:02:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:45.501 INFO: Checking if target configuration is the same... 
00:03:45.501 11:02:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.501 11:02:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:45.501 11:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:45.501 + '[' 2 -ne 2 ']' 00:03:45.501 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:45.501 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:45.501 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.501 +++ basename /dev/fd/62 00:03:45.501 ++ mktemp /tmp/62.XXX 00:03:45.501 + tmp_file_1=/tmp/62.OQm 00:03:45.501 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.501 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:45.501 + tmp_file_2=/tmp/spdk_tgt_config.json.JDM 00:03:45.501 + ret=0 00:03:45.501 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:45.763 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:45.763 + diff -u /tmp/62.OQm /tmp/spdk_tgt_config.json.JDM 00:03:45.763 + echo 'INFO: JSON config files are the same' 00:03:45.763 INFO: JSON config files are the same 00:03:45.763 + rm /tmp/62.OQm /tmp/spdk_tgt_config.json.JDM 00:03:45.763 + exit 0 00:03:45.763 11:02:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:45.763 11:02:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:45.763 INFO: changing configuration and checking if this can be detected... 
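The "same configuration" check traced above dumps the live config via `save_config`, runs both JSON documents through `config_filter.py -method sort`, and `diff -u`s the normalized copies. A simplified stand-in for the normalization step using python3's key sorting (an assumption for illustration only: SPDK's real filter does more than sort keys):

```shell
#!/usr/bin/env bash
# Compare two JSON configs for semantic equality: normalize key order
# first so ordering differences don't register as changes, then diff.
sort_json() {
    python3 -c 'import json,sys; json.dump(json.load(sys.stdin), sys.stdout, sort_keys=True, indent=2)'
}

tmp1=$(mktemp /tmp/cfg.XXXXXX); tmp2=$(mktemp /tmp/cfg.XXXXXX)
printf '{"subsystems": [], "b": 2, "a": 1}' | sort_json > "$tmp1"
printf '{"a": 1, "subsystems": [], "b": 2}' | sort_json > "$tmp2"

if diff -u "$tmp1" "$tmp2" > /dev/null; then
    result=same
    echo "INFO: JSON config files are the same"
else
    result=different
fi
rm -f "$tmp1" "$tmp2"
```

Diffing the normalized temp files rather than the originals is the whole trick: a semantically identical config saved in a different key order produces an empty diff and exit status 0.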
00:03:45.763 11:02:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:45.763 11:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:46.023 11:02:52 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:46.023 11:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.023 11:02:52 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:46.023 + '[' 2 -ne 2 ']' 00:03:46.023 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:46.024 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:46.024 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:46.024 +++ basename /dev/fd/62 00:03:46.024 ++ mktemp /tmp/62.XXX 00:03:46.024 + tmp_file_1=/tmp/62.hvR 00:03:46.024 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:46.024 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:46.024 + tmp_file_2=/tmp/spdk_tgt_config.json.g7H 00:03:46.024 + ret=0 00:03:46.024 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.283 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.283 + diff -u /tmp/62.hvR /tmp/spdk_tgt_config.json.g7H 00:03:46.283 + ret=1 00:03:46.283 + echo '=== Start of file: /tmp/62.hvR ===' 00:03:46.283 + cat /tmp/62.hvR 00:03:46.543 + echo '=== End of file: /tmp/62.hvR ===' 00:03:46.543 + echo '' 00:03:46.543 + echo '=== Start of file: /tmp/spdk_tgt_config.json.g7H ===' 00:03:46.543 + cat /tmp/spdk_tgt_config.json.g7H 00:03:46.543 + echo '=== End of file: /tmp/spdk_tgt_config.json.g7H ===' 00:03:46.543 + echo '' 00:03:46.543 + rm /tmp/62.hvR /tmp/spdk_tgt_config.json.g7H 00:03:46.543 + exit 1 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:46.543 INFO: configuration change detected. 
00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 3172043 ]] 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.543 11:02:52 json_config -- json_config/json_config.sh@330 -- # killprocess 3172043 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 3172043 ']' 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@958 -- # kill -0 
3172043 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@959 -- # uname 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172043 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172043' 00:03:46.543 killing process with pid 3172043 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@973 -- # kill 3172043 00:03:46.543 11:02:52 json_config -- common/autotest_common.sh@978 -- # wait 3172043 00:03:46.804 11:02:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:46.804 11:02:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:46.804 11:02:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:46.804 11:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.804 11:02:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:46.804 11:02:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:46.804 INFO: Success 00:03:46.804 00:03:46.804 real 0m7.323s 00:03:46.804 user 0m8.767s 00:03:46.804 sys 0m2.014s 00:03:46.804 11:02:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.804 11:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.804 ************************************ 00:03:46.804 END TEST json_config 00:03:46.804 ************************************ 00:03:46.804 11:02:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:46.804 11:02:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.804 11:02:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.804 11:02:52 -- common/autotest_common.sh@10 -- # set +x 00:03:46.804 ************************************ 00:03:46.804 START TEST json_config_extra_key 00:03:46.804 ************************************ 00:03:46.804 11:02:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
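The `killprocess` helper traced earlier (during json_config teardown) checks the victim's command name with `ps --no-headers -o comm=` and refuses to signal it if the name is `sudo`, so a stale or recycled PID never takes down the wrapper. A hedged sketch of that safety pattern (the real helper also branches on `uname -s` for FreeBSD, which this omits):

```shell
#!/usr/bin/env bash
# Kill a PID only after verifying its command name with ps, so a
# recycled PID belonging to e.g. a sudo wrapper is never signalled.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0          # nothing to do
    name=$(ps --no-headers -o comm= -p "$pid")
    if [ "$name" = sudo ]; then
        echo "refusing to signal sudo wrapper $pid" >&2
        return 1
    fi
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it
}

sleep 60 &
demo_pid=$!
killprocess "$demo_pid"
alive=no
kill -0 "$demo_pid" 2>/dev/null && alive=yes
```

The trailing `wait` matters: it reaps the child so the test script sees the process fully gone, not lingering as a zombie, before the next stage starts.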
00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.067 11:02:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.067 --rc genhtml_branch_coverage=1 00:03:47.067 --rc genhtml_function_coverage=1 00:03:47.067 --rc genhtml_legend=1 00:03:47.067 --rc geninfo_all_blocks=1 
00:03:47.067 --rc geninfo_unexecuted_blocks=1 00:03:47.067 00:03:47.067 ' 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.067 --rc genhtml_branch_coverage=1 00:03:47.067 --rc genhtml_function_coverage=1 00:03:47.067 --rc genhtml_legend=1 00:03:47.067 --rc geninfo_all_blocks=1 00:03:47.067 --rc geninfo_unexecuted_blocks=1 00:03:47.067 00:03:47.067 ' 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.067 --rc genhtml_branch_coverage=1 00:03:47.067 --rc genhtml_function_coverage=1 00:03:47.067 --rc genhtml_legend=1 00:03:47.067 --rc geninfo_all_blocks=1 00:03:47.067 --rc geninfo_unexecuted_blocks=1 00:03:47.067 00:03:47.067 ' 00:03:47.067 11:02:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:47.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.067 --rc genhtml_branch_coverage=1 00:03:47.067 --rc genhtml_function_coverage=1 00:03:47.067 --rc genhtml_legend=1 00:03:47.067 --rc geninfo_all_blocks=1 00:03:47.067 --rc geninfo_unexecuted_blocks=1 00:03:47.067 00:03:47.067 ' 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
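The `cmp_versions 1.15 '<' 2` run traced above splits each version on dots, pads the shorter one with zeros, and compares field by field as integers. A self-contained sketch of that comparison (numeric components only; unlike the real helper's `decimal` normalization, this does not handle suffixes such as `-rc1`):

```shell
#!/usr/bin/env bash
# Field-wise numeric "less than" for dotted versions: missing fields
# count as 0, so 1.15 < 2 and 2.0 is not less than 2.
ver_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal, hence not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2: lcov predates 2.x option handling"
```

String comparison would get this wrong (`"1.15" > "2"` lexically is false, but `"1.9" < "1.15"` lexically is also false), which is why the harness compares numerically per field.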
00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:47.068 11:02:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:47.068 11:02:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:47.068 11:02:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.068 11:02:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.068 11:02:53 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.068 11:02:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.068 11:02:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.068 11:02:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:47.068 11:02:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:47.068 11:02:53 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:47.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:47.068 11:02:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:47.068 INFO: launching applications... 00:03:47.068 11:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3172795 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:47.068 Waiting for target to run... 
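The `nvmf/common.sh: line 33: [: : integer expression expected` message logged above is a classic shell pitfall: `[ "$x" -eq 1 ]` with an empty `$x` is an operand error, returning status 2 rather than an ordinary "false" (status 1). A small demonstration with two common guards:

```shell
#!/usr/bin/env bash
# `-eq` requires integer operands; an empty variable makes the test
# command error out (status 2) instead of evaluating to false (1).
x=''

[ "$x" -eq 1 ] 2>/dev/null
rc=$?          # 2: operand error, distinct from a plain false

# Guard 1: supply a numeric default so the operand is never empty.
guard1=no
[ "${x:-0}" -eq 1 ] || guard1=took-else-branch

# Guard 2: test for emptiness first and short-circuit.
guard2=skipped
if [ -n "$x" ] && [ "$x" -eq 1 ]; then
    guard2=matched
fi
echo "raw status: $rc, guard1: $guard1, guard2: $guard2"
```

In this run the error is harmless because the `[' ... ']` sits in an `if` that simply falls through, but the status-2 distinction is worth knowing when a script runs under `set -e`.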
00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3172795 /var/tmp/spdk_tgt.sock 00:03:47.068 11:02:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3172795 ']' 00:03:47.068 11:02:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:47.068 11:02:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:47.068 11:02:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.068 11:02:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:47.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:47.068 11:02:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.068 11:02:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:47.068 [2024-12-06 11:02:53.226242] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
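`waitforlisten 3172795 /var/tmp/spdk_tgt.sock` above blocks with a `max_retries=100` budget until the freshly launched target is accepting RPCs. A sketch of that retry-until-ready pattern (the real helper waits on the UNIX domain socket and probes the RPC server; this simplified version only polls for a path to exist):

```shell
#!/usr/bin/env bash
# Poll until a path appears, bounded by max_retries, before driving
# requests at a just-started server.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

flag=$(mktemp -u)              # reserve a name; file does not exist yet
( sleep 0.3; : > "$flag" ) &   # stand-in for the target creating its socket
echo "Waiting for target to run..."
wait_for_path "$flag"
rc=$?
rm -f "$flag"
```

Bounding the loop is what turns a hung target into a fast, diagnosable test failure instead of a stalled CI job.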
00:03:47.068 [2024-12-06 11:02:53.226311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172795 ] 00:03:47.639 [2024-12-06 11:02:53.499299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.639 [2024-12-06 11:02:53.528258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.901 11:02:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.901 11:02:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:47.901 00:03:47.901 11:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:47.901 INFO: shutting down applications... 00:03:47.901 11:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3172795 ]] 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3172795 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3172795 00:03:47.901 11:02:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:48.472 11:02:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:48.472 11:02:54 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:48.472 11:02:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3172795 00:03:48.472 11:02:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:48.472 11:02:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:48.472 11:02:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:48.472 11:02:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:48.472 SPDK target shutdown done 00:03:48.472 11:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:48.472 Success 00:03:48.472 00:03:48.472 real 0m1.557s 00:03:48.472 user 0m1.213s 00:03:48.472 sys 0m0.388s 00:03:48.472 11:02:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.472 11:02:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:48.472 ************************************ 00:03:48.472 END TEST json_config_extra_key 00:03:48.472 ************************************ 00:03:48.472 11:02:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:48.472 11:02:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.472 11:02:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.472 11:02:54 -- common/autotest_common.sh@10 -- # set +x 00:03:48.472 ************************************ 00:03:48.472 START TEST alias_rpc 00:03:48.472 ************************************ 00:03:48.472 11:02:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:48.734 * Looking for test storage... 
00:03:48.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.734 11:02:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:48.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.734 --rc genhtml_branch_coverage=1 00:03:48.734 --rc genhtml_function_coverage=1 00:03:48.734 --rc genhtml_legend=1 00:03:48.734 --rc geninfo_all_blocks=1 00:03:48.734 --rc geninfo_unexecuted_blocks=1 00:03:48.734 00:03:48.734 ' 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:48.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.734 --rc genhtml_branch_coverage=1 00:03:48.734 --rc genhtml_function_coverage=1 00:03:48.734 --rc genhtml_legend=1 00:03:48.734 --rc geninfo_all_blocks=1 00:03:48.734 --rc geninfo_unexecuted_blocks=1 00:03:48.734 00:03:48.734 ' 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:03:48.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.734 --rc genhtml_branch_coverage=1 00:03:48.734 --rc genhtml_function_coverage=1 00:03:48.734 --rc genhtml_legend=1 00:03:48.734 --rc geninfo_all_blocks=1 00:03:48.734 --rc geninfo_unexecuted_blocks=1 00:03:48.734 00:03:48.734 ' 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:48.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.734 --rc genhtml_branch_coverage=1 00:03:48.734 --rc genhtml_function_coverage=1 00:03:48.734 --rc genhtml_legend=1 00:03:48.734 --rc geninfo_all_blocks=1 00:03:48.734 --rc geninfo_unexecuted_blocks=1 00:03:48.734 00:03:48.734 ' 00:03:48.734 11:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:48.734 11:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3173191 00:03:48.734 11:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3173191 00:03:48.734 11:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3173191 ']' 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.734 11:02:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.734 [2024-12-06 11:02:54.851353] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:48.734 [2024-12-06 11:02:54.851425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173191 ] 00:03:48.996 [2024-12-06 11:02:54.937273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.996 [2024-12-06 11:02:54.979251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.565 11:02:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.565 11:02:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:49.565 11:02:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:49.825 11:02:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3173191 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3173191 ']' 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3173191 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3173191 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3173191' 00:03:49.825 killing process with pid 3173191 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 3173191 00:03:49.825 11:02:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 3173191 00:03:50.085 00:03:50.085 real 0m1.544s 00:03:50.085 user 0m1.709s 00:03:50.085 sys 0m0.428s 00:03:50.085 11:02:56 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.085 11:02:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.085 ************************************ 00:03:50.085 END TEST alias_rpc 00:03:50.085 ************************************ 00:03:50.085 11:02:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:50.085 11:02:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:50.085 11:02:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.085 11:02:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.085 11:02:56 -- common/autotest_common.sh@10 -- # set +x 00:03:50.085 ************************************ 00:03:50.085 START TEST spdkcli_tcp 00:03:50.085 ************************************ 00:03:50.085 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:50.347 * Looking for test storage... 
00:03:50.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.347 11:02:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.347 --rc genhtml_branch_coverage=1 00:03:50.347 --rc genhtml_function_coverage=1 00:03:50.347 --rc genhtml_legend=1 00:03:50.347 --rc geninfo_all_blocks=1 00:03:50.347 --rc geninfo_unexecuted_blocks=1 00:03:50.347 00:03:50.347 ' 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.347 --rc genhtml_branch_coverage=1 00:03:50.347 --rc genhtml_function_coverage=1 00:03:50.347 --rc genhtml_legend=1 00:03:50.347 --rc geninfo_all_blocks=1 00:03:50.347 --rc geninfo_unexecuted_blocks=1 00:03:50.347 00:03:50.347 ' 00:03:50.347 11:02:56 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.347 --rc genhtml_branch_coverage=1 00:03:50.347 --rc genhtml_function_coverage=1 00:03:50.347 --rc genhtml_legend=1 00:03:50.347 --rc geninfo_all_blocks=1 00:03:50.347 --rc geninfo_unexecuted_blocks=1 00:03:50.347 00:03:50.347 ' 00:03:50.347 11:02:56 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.347 --rc genhtml_branch_coverage=1 00:03:50.347 --rc genhtml_function_coverage=1 00:03:50.347 --rc genhtml_legend=1 00:03:50.347 --rc geninfo_all_blocks=1 00:03:50.347 --rc geninfo_unexecuted_blocks=1 00:03:50.347 00:03:50.347 ' 00:03:50.347 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:50.347 11:02:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:50.347 11:02:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:50.347 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:50.347 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:50.347 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:50.348 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:50.348 11:02:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.348 11:02:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:50.348 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3173593 00:03:50.348 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3173593 00:03:50.348 11:02:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3173593 ']' 00:03:50.348 
11:02:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.348 11:02:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.348 11:02:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.348 11:02:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.348 11:02:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:50.348 11:02:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:50.348 [2024-12-06 11:02:56.449840] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:03:50.348 [2024-12-06 11:02:56.449936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173593 ] 00:03:50.609 [2024-12-06 11:02:56.531899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:50.609 [2024-12-06 11:02:56.574910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:50.609 [2024-12-06 11:02:56.574932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.181 11:02:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.181 11:02:57 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:51.181 11:02:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3173616 00:03:51.181 11:02:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:51.181 11:02:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:03:51.444 [ 00:03:51.444 "bdev_malloc_delete", 00:03:51.444 "bdev_malloc_create", 00:03:51.444 "bdev_null_resize", 00:03:51.444 "bdev_null_delete", 00:03:51.444 "bdev_null_create", 00:03:51.444 "bdev_nvme_cuse_unregister", 00:03:51.444 "bdev_nvme_cuse_register", 00:03:51.444 "bdev_opal_new_user", 00:03:51.444 "bdev_opal_set_lock_state", 00:03:51.444 "bdev_opal_delete", 00:03:51.444 "bdev_opal_get_info", 00:03:51.444 "bdev_opal_create", 00:03:51.444 "bdev_nvme_opal_revert", 00:03:51.444 "bdev_nvme_opal_init", 00:03:51.444 "bdev_nvme_send_cmd", 00:03:51.444 "bdev_nvme_set_keys", 00:03:51.444 "bdev_nvme_get_path_iostat", 00:03:51.444 "bdev_nvme_get_mdns_discovery_info", 00:03:51.444 "bdev_nvme_stop_mdns_discovery", 00:03:51.444 "bdev_nvme_start_mdns_discovery", 00:03:51.444 "bdev_nvme_set_multipath_policy", 00:03:51.444 "bdev_nvme_set_preferred_path", 00:03:51.444 "bdev_nvme_get_io_paths", 00:03:51.444 "bdev_nvme_remove_error_injection", 00:03:51.444 "bdev_nvme_add_error_injection", 00:03:51.444 "bdev_nvme_get_discovery_info", 00:03:51.444 "bdev_nvme_stop_discovery", 00:03:51.444 "bdev_nvme_start_discovery", 00:03:51.444 "bdev_nvme_get_controller_health_info", 00:03:51.444 "bdev_nvme_disable_controller", 00:03:51.444 "bdev_nvme_enable_controller", 00:03:51.444 "bdev_nvme_reset_controller", 00:03:51.444 "bdev_nvme_get_transport_statistics", 00:03:51.444 "bdev_nvme_apply_firmware", 00:03:51.444 "bdev_nvme_detach_controller", 00:03:51.444 "bdev_nvme_get_controllers", 00:03:51.444 "bdev_nvme_attach_controller", 00:03:51.444 "bdev_nvme_set_hotplug", 00:03:51.444 "bdev_nvme_set_options", 00:03:51.444 "bdev_passthru_delete", 00:03:51.444 "bdev_passthru_create", 00:03:51.444 "bdev_lvol_set_parent_bdev", 00:03:51.444 "bdev_lvol_set_parent", 00:03:51.444 "bdev_lvol_check_shallow_copy", 00:03:51.444 "bdev_lvol_start_shallow_copy", 00:03:51.444 "bdev_lvol_grow_lvstore", 00:03:51.444 "bdev_lvol_get_lvols", 00:03:51.444 "bdev_lvol_get_lvstores", 
00:03:51.444 "bdev_lvol_delete", 00:03:51.444 "bdev_lvol_set_read_only", 00:03:51.444 "bdev_lvol_resize", 00:03:51.444 "bdev_lvol_decouple_parent", 00:03:51.444 "bdev_lvol_inflate", 00:03:51.444 "bdev_lvol_rename", 00:03:51.444 "bdev_lvol_clone_bdev", 00:03:51.444 "bdev_lvol_clone", 00:03:51.444 "bdev_lvol_snapshot", 00:03:51.444 "bdev_lvol_create", 00:03:51.444 "bdev_lvol_delete_lvstore", 00:03:51.444 "bdev_lvol_rename_lvstore", 00:03:51.444 "bdev_lvol_create_lvstore", 00:03:51.444 "bdev_raid_set_options", 00:03:51.444 "bdev_raid_remove_base_bdev", 00:03:51.444 "bdev_raid_add_base_bdev", 00:03:51.444 "bdev_raid_delete", 00:03:51.444 "bdev_raid_create", 00:03:51.444 "bdev_raid_get_bdevs", 00:03:51.444 "bdev_error_inject_error", 00:03:51.444 "bdev_error_delete", 00:03:51.444 "bdev_error_create", 00:03:51.444 "bdev_split_delete", 00:03:51.444 "bdev_split_create", 00:03:51.444 "bdev_delay_delete", 00:03:51.444 "bdev_delay_create", 00:03:51.444 "bdev_delay_update_latency", 00:03:51.444 "bdev_zone_block_delete", 00:03:51.444 "bdev_zone_block_create", 00:03:51.444 "blobfs_create", 00:03:51.444 "blobfs_detect", 00:03:51.444 "blobfs_set_cache_size", 00:03:51.444 "bdev_aio_delete", 00:03:51.444 "bdev_aio_rescan", 00:03:51.444 "bdev_aio_create", 00:03:51.444 "bdev_ftl_set_property", 00:03:51.444 "bdev_ftl_get_properties", 00:03:51.444 "bdev_ftl_get_stats", 00:03:51.444 "bdev_ftl_unmap", 00:03:51.444 "bdev_ftl_unload", 00:03:51.444 "bdev_ftl_delete", 00:03:51.444 "bdev_ftl_load", 00:03:51.444 "bdev_ftl_create", 00:03:51.444 "bdev_virtio_attach_controller", 00:03:51.444 "bdev_virtio_scsi_get_devices", 00:03:51.444 "bdev_virtio_detach_controller", 00:03:51.444 "bdev_virtio_blk_set_hotplug", 00:03:51.444 "bdev_iscsi_delete", 00:03:51.444 "bdev_iscsi_create", 00:03:51.444 "bdev_iscsi_set_options", 00:03:51.444 "accel_error_inject_error", 00:03:51.444 "ioat_scan_accel_module", 00:03:51.444 "dsa_scan_accel_module", 00:03:51.444 "iaa_scan_accel_module", 00:03:51.444 
"vfu_virtio_create_fs_endpoint", 00:03:51.444 "vfu_virtio_create_scsi_endpoint", 00:03:51.444 "vfu_virtio_scsi_remove_target", 00:03:51.444 "vfu_virtio_scsi_add_target", 00:03:51.444 "vfu_virtio_create_blk_endpoint", 00:03:51.444 "vfu_virtio_delete_endpoint", 00:03:51.444 "keyring_file_remove_key", 00:03:51.444 "keyring_file_add_key", 00:03:51.444 "keyring_linux_set_options", 00:03:51.444 "fsdev_aio_delete", 00:03:51.444 "fsdev_aio_create", 00:03:51.444 "iscsi_get_histogram", 00:03:51.444 "iscsi_enable_histogram", 00:03:51.444 "iscsi_set_options", 00:03:51.444 "iscsi_get_auth_groups", 00:03:51.444 "iscsi_auth_group_remove_secret", 00:03:51.444 "iscsi_auth_group_add_secret", 00:03:51.444 "iscsi_delete_auth_group", 00:03:51.444 "iscsi_create_auth_group", 00:03:51.444 "iscsi_set_discovery_auth", 00:03:51.444 "iscsi_get_options", 00:03:51.444 "iscsi_target_node_request_logout", 00:03:51.444 "iscsi_target_node_set_redirect", 00:03:51.444 "iscsi_target_node_set_auth", 00:03:51.444 "iscsi_target_node_add_lun", 00:03:51.444 "iscsi_get_stats", 00:03:51.444 "iscsi_get_connections", 00:03:51.445 "iscsi_portal_group_set_auth", 00:03:51.445 "iscsi_start_portal_group", 00:03:51.445 "iscsi_delete_portal_group", 00:03:51.445 "iscsi_create_portal_group", 00:03:51.445 "iscsi_get_portal_groups", 00:03:51.445 "iscsi_delete_target_node", 00:03:51.445 "iscsi_target_node_remove_pg_ig_maps", 00:03:51.445 "iscsi_target_node_add_pg_ig_maps", 00:03:51.445 "iscsi_create_target_node", 00:03:51.445 "iscsi_get_target_nodes", 00:03:51.445 "iscsi_delete_initiator_group", 00:03:51.445 "iscsi_initiator_group_remove_initiators", 00:03:51.445 "iscsi_initiator_group_add_initiators", 00:03:51.445 "iscsi_create_initiator_group", 00:03:51.445 "iscsi_get_initiator_groups", 00:03:51.445 "nvmf_set_crdt", 00:03:51.445 "nvmf_set_config", 00:03:51.445 "nvmf_set_max_subsystems", 00:03:51.445 "nvmf_stop_mdns_prr", 00:03:51.445 "nvmf_publish_mdns_prr", 00:03:51.445 "nvmf_subsystem_get_listeners", 00:03:51.445 
"nvmf_subsystem_get_qpairs", 00:03:51.445 "nvmf_subsystem_get_controllers", 00:03:51.445 "nvmf_get_stats", 00:03:51.445 "nvmf_get_transports", 00:03:51.445 "nvmf_create_transport", 00:03:51.445 "nvmf_get_targets", 00:03:51.445 "nvmf_delete_target", 00:03:51.445 "nvmf_create_target", 00:03:51.445 "nvmf_subsystem_allow_any_host", 00:03:51.445 "nvmf_subsystem_set_keys", 00:03:51.445 "nvmf_discovery_referral_remove_host", 00:03:51.445 "nvmf_discovery_referral_add_host", 00:03:51.445 "nvmf_subsystem_remove_host", 00:03:51.445 "nvmf_subsystem_add_host", 00:03:51.445 "nvmf_ns_remove_host", 00:03:51.445 "nvmf_ns_add_host", 00:03:51.445 "nvmf_subsystem_remove_ns", 00:03:51.445 "nvmf_subsystem_set_ns_ana_group", 00:03:51.445 "nvmf_subsystem_add_ns", 00:03:51.445 "nvmf_subsystem_listener_set_ana_state", 00:03:51.445 "nvmf_discovery_get_referrals", 00:03:51.445 "nvmf_discovery_remove_referral", 00:03:51.445 "nvmf_discovery_add_referral", 00:03:51.445 "nvmf_subsystem_remove_listener", 00:03:51.445 "nvmf_subsystem_add_listener", 00:03:51.445 "nvmf_delete_subsystem", 00:03:51.445 "nvmf_create_subsystem", 00:03:51.445 "nvmf_get_subsystems", 00:03:51.445 "env_dpdk_get_mem_stats", 00:03:51.445 "nbd_get_disks", 00:03:51.445 "nbd_stop_disk", 00:03:51.445 "nbd_start_disk", 00:03:51.445 "ublk_recover_disk", 00:03:51.445 "ublk_get_disks", 00:03:51.445 "ublk_stop_disk", 00:03:51.445 "ublk_start_disk", 00:03:51.445 "ublk_destroy_target", 00:03:51.445 "ublk_create_target", 00:03:51.445 "virtio_blk_create_transport", 00:03:51.445 "virtio_blk_get_transports", 00:03:51.445 "vhost_controller_set_coalescing", 00:03:51.445 "vhost_get_controllers", 00:03:51.445 "vhost_delete_controller", 00:03:51.445 "vhost_create_blk_controller", 00:03:51.445 "vhost_scsi_controller_remove_target", 00:03:51.445 "vhost_scsi_controller_add_target", 00:03:51.445 "vhost_start_scsi_controller", 00:03:51.445 "vhost_create_scsi_controller", 00:03:51.445 "thread_set_cpumask", 00:03:51.445 "scheduler_set_options", 
00:03:51.445 "framework_get_governor", 00:03:51.445 "framework_get_scheduler", 00:03:51.445 "framework_set_scheduler", 00:03:51.445 "framework_get_reactors", 00:03:51.445 "thread_get_io_channels", 00:03:51.445 "thread_get_pollers", 00:03:51.445 "thread_get_stats", 00:03:51.445 "framework_monitor_context_switch", 00:03:51.445 "spdk_kill_instance", 00:03:51.445 "log_enable_timestamps", 00:03:51.445 "log_get_flags", 00:03:51.445 "log_clear_flag", 00:03:51.445 "log_set_flag", 00:03:51.445 "log_get_level", 00:03:51.445 "log_set_level", 00:03:51.445 "log_get_print_level", 00:03:51.445 "log_set_print_level", 00:03:51.445 "framework_enable_cpumask_locks", 00:03:51.445 "framework_disable_cpumask_locks", 00:03:51.445 "framework_wait_init", 00:03:51.445 "framework_start_init", 00:03:51.445 "scsi_get_devices", 00:03:51.445 "bdev_get_histogram", 00:03:51.445 "bdev_enable_histogram", 00:03:51.445 "bdev_set_qos_limit", 00:03:51.445 "bdev_set_qd_sampling_period", 00:03:51.445 "bdev_get_bdevs", 00:03:51.445 "bdev_reset_iostat", 00:03:51.445 "bdev_get_iostat", 00:03:51.445 "bdev_examine", 00:03:51.445 "bdev_wait_for_examine", 00:03:51.445 "bdev_set_options", 00:03:51.445 "accel_get_stats", 00:03:51.445 "accel_set_options", 00:03:51.445 "accel_set_driver", 00:03:51.445 "accel_crypto_key_destroy", 00:03:51.445 "accel_crypto_keys_get", 00:03:51.445 "accel_crypto_key_create", 00:03:51.445 "accel_assign_opc", 00:03:51.445 "accel_get_module_info", 00:03:51.445 "accel_get_opc_assignments", 00:03:51.445 "vmd_rescan", 00:03:51.445 "vmd_remove_device", 00:03:51.445 "vmd_enable", 00:03:51.445 "sock_get_default_impl", 00:03:51.445 "sock_set_default_impl", 00:03:51.445 "sock_impl_set_options", 00:03:51.445 "sock_impl_get_options", 00:03:51.445 "iobuf_get_stats", 00:03:51.445 "iobuf_set_options", 00:03:51.445 "keyring_get_keys", 00:03:51.445 "vfu_tgt_set_base_path", 00:03:51.445 "framework_get_pci_devices", 00:03:51.445 "framework_get_config", 00:03:51.445 "framework_get_subsystems", 00:03:51.445 
"fsdev_set_opts", 00:03:51.445 "fsdev_get_opts", 00:03:51.445 "trace_get_info", 00:03:51.445 "trace_get_tpoint_group_mask", 00:03:51.445 "trace_disable_tpoint_group", 00:03:51.445 "trace_enable_tpoint_group", 00:03:51.445 "trace_clear_tpoint_mask", 00:03:51.445 "trace_set_tpoint_mask", 00:03:51.445 "notify_get_notifications", 00:03:51.445 "notify_get_types", 00:03:51.445 "spdk_get_version", 00:03:51.445 "rpc_get_methods" 00:03:51.445 ] 00:03:51.445 11:02:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:51.445 11:02:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:51.445 11:02:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3173593 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3173593 ']' 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3173593 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3173593 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3173593' 00:03:51.445 killing process with pid 3173593 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3173593 00:03:51.445 11:02:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3173593 00:03:51.707 00:03:51.707 real 0m1.524s 00:03:51.707 user 0m2.814s 00:03:51.707 sys 0m0.440s 00:03:51.707 11:02:57 spdkcli_tcp -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:03:51.707 11:02:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:51.707 ************************************ 00:03:51.707 END TEST spdkcli_tcp 00:03:51.707 ************************************ 00:03:51.707 11:02:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:51.707 11:02:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.707 11:02:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.707 11:02:57 -- common/autotest_common.sh@10 -- # set +x 00:03:51.707 ************************************ 00:03:51.707 START TEST dpdk_mem_utility 00:03:51.707 ************************************ 00:03:51.707 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:51.967 * Looking for test storage... 00:03:51.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.967 11:02:57 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.967 11:02:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.967 11:02:57 dpdk_mem_utility 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.967 --rc genhtml_branch_coverage=1 00:03:51.967 --rc genhtml_function_coverage=1 00:03:51.967 --rc genhtml_legend=1 00:03:51.967 --rc geninfo_all_blocks=1 00:03:51.967 --rc geninfo_unexecuted_blocks=1 00:03:51.967 00:03:51.967 ' 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.967 --rc genhtml_branch_coverage=1 00:03:51.967 --rc genhtml_function_coverage=1 00:03:51.967 --rc genhtml_legend=1 00:03:51.967 --rc geninfo_all_blocks=1 00:03:51.967 --rc geninfo_unexecuted_blocks=1 00:03:51.967 00:03:51.967 ' 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:51.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.967 --rc genhtml_branch_coverage=1 00:03:51.967 --rc genhtml_function_coverage=1 00:03:51.967 --rc genhtml_legend=1 00:03:51.967 --rc geninfo_all_blocks=1 00:03:51.967 --rc geninfo_unexecuted_blocks=1 00:03:51.967 00:03:51.967 ' 00:03:51.967 11:02:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.968 --rc genhtml_branch_coverage=1 00:03:51.968 --rc genhtml_function_coverage=1 00:03:51.968 --rc genhtml_legend=1 00:03:51.968 --rc geninfo_all_blocks=1 00:03:51.968 --rc geninfo_unexecuted_blocks=1 00:03:51.968 00:03:51.968 ' 00:03:51.968 11:02:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:51.968 11:02:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3174004 00:03:51.968 11:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3174004 00:03:51.968 11:02:58 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.968 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3174004 ']' 00:03:51.968 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.968 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.968 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.968 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.968 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:51.968 [2024-12-06 11:02:58.070053] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:51.968 [2024-12-06 11:02:58.070125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174004 ] 00:03:52.228 [2024-12-06 11:02:58.148410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.229 [2024-12-06 11:02:58.184238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.800 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.800 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:52.800 11:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:52.800 11:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:52.800 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.800 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:52.800 { 00:03:52.800 "filename": "/tmp/spdk_mem_dump.txt" 00:03:52.800 } 00:03:52.800 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.800 11:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:52.800 DPDK memory size 818.000000 MiB in 1 heap(s) 00:03:52.800 1 heaps totaling size 818.000000 MiB 00:03:52.800 size: 818.000000 MiB heap id: 0 00:03:52.800 end heaps---------- 00:03:52.800 9 mempools totaling size 603.782043 MiB 00:03:52.800 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:52.800 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:52.800 size: 100.555481 MiB name: bdev_io_3174004 00:03:52.801 size: 50.003479 MiB name: msgpool_3174004 00:03:52.801 size: 36.509338 MiB name: fsdev_io_3174004 
00:03:52.801 size: 21.763794 MiB name: PDU_Pool 00:03:52.801 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:52.801 size: 4.133484 MiB name: evtpool_3174004 00:03:52.801 size: 0.026123 MiB name: Session_Pool 00:03:52.801 end mempools------- 00:03:52.801 6 memzones totaling size 4.142822 MiB 00:03:52.801 size: 1.000366 MiB name: RG_ring_0_3174004 00:03:52.801 size: 1.000366 MiB name: RG_ring_1_3174004 00:03:52.801 size: 1.000366 MiB name: RG_ring_4_3174004 00:03:52.801 size: 1.000366 MiB name: RG_ring_5_3174004 00:03:52.801 size: 0.125366 MiB name: RG_ring_2_3174004 00:03:52.801 size: 0.015991 MiB name: RG_ring_3_3174004 00:03:52.801 end memzones------- 00:03:52.801 11:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:52.801 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:52.801 list of free elements. size: 10.852478 MiB 00:03:52.801 element at address: 0x200019200000 with size: 0.999878 MiB 00:03:52.801 element at address: 0x200019400000 with size: 0.999878 MiB 00:03:52.801 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:52.801 element at address: 0x200032000000 with size: 0.994446 MiB 00:03:52.801 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:52.801 element at address: 0x200012c00000 with size: 0.944275 MiB 00:03:52.801 element at address: 0x200019600000 with size: 0.936584 MiB 00:03:52.801 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:52.801 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:03:52.801 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:52.801 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:52.801 element at address: 0x200019800000 with size: 0.485657 MiB 00:03:52.801 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:52.801 element at address: 0x200028200000 with size: 0.410034 
MiB 00:03:52.801 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:52.801 list of standard malloc elements. size: 199.218628 MiB 00:03:52.801 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:52.801 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:52.801 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:03:52.801 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:03:52.801 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:03:52.801 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:52.801 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:03:52.801 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:52.801 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:03:52.801 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200000cff0c0 with 
size: 0.000183 MiB 00:03:52.801 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:03:52.801 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200028268f80 with size: 0.000183 MiB 00:03:52.801 element at address: 0x200028269040 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:03:52.801 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:03:52.801 list of memzone associated elements. 
size: 607.928894 MiB 00:03:52.801 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:03:52.801 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:52.801 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:03:52.801 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:52.801 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:03:52.801 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3174004_0 00:03:52.801 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:52.801 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3174004_0 00:03:52.801 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:52.801 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3174004_0 00:03:52.801 element at address: 0x2000199be940 with size: 20.255554 MiB 00:03:52.801 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:52.801 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:03:52.801 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:52.801 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:52.801 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3174004_0 00:03:52.801 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:52.801 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3174004 00:03:52.801 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:52.801 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3174004 00:03:52.801 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:52.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:52.801 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:03:52.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:52.801 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:52.801 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:52.801 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:52.801 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:52.801 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:52.801 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3174004 00:03:52.801 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:52.801 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3174004 00:03:52.801 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:03:52.801 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3174004 00:03:52.801 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:03:52.801 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3174004 00:03:52.801 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:52.801 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3174004 00:03:52.801 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:52.801 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3174004 00:03:52.801 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:52.801 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:52.801 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:52.801 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:52.801 element at address: 0x20001987c540 with size: 0.250488 MiB 00:03:52.801 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:52.801 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:52.801 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3174004 00:03:52.801 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:52.801 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3174004 00:03:52.801 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:03:52.801 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:52.801 element at address: 0x200028269100 with size: 0.023743 MiB 00:03:52.801 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:52.801 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:52.801 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3174004 00:03:52.801 element at address: 0x20002826f240 with size: 0.002441 MiB 00:03:52.801 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:52.801 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:52.801 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3174004 00:03:52.801 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:52.801 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3174004 00:03:52.801 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:52.801 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3174004 00:03:52.801 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:03:52.801 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:52.801 11:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:52.802 11:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3174004 00:03:52.802 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3174004 ']' 00:03:52.802 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3174004 00:03:52.802 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:03:52.802 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.802 11:02:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3174004 00:03:53.063 11:02:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.063 11:02:59 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.063 11:02:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3174004' 00:03:53.063 killing process with pid 3174004 00:03:53.063 11:02:59 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3174004 00:03:53.063 11:02:59 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3174004 00:03:53.063 00:03:53.063 real 0m1.420s 00:03:53.063 user 0m1.514s 00:03:53.063 sys 0m0.400s 00:03:53.063 11:02:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.063 11:02:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:53.063 ************************************ 00:03:53.063 END TEST dpdk_mem_utility 00:03:53.063 ************************************ 00:03:53.324 11:02:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:53.324 11:02:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.324 11:02:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.325 11:02:59 -- common/autotest_common.sh@10 -- # set +x 00:03:53.325 ************************************ 00:03:53.325 START TEST event 00:03:53.325 ************************************ 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:53.325 * Looking for test storage... 
00:03:53.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1711 -- # lcov --version 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:53.325 11:02:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.325 11:02:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.325 11:02:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.325 11:02:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.325 11:02:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.325 11:02:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.325 11:02:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.325 11:02:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.325 11:02:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.325 11:02:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.325 11:02:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.325 11:02:59 event -- scripts/common.sh@344 -- # case "$op" in 00:03:53.325 11:02:59 event -- scripts/common.sh@345 -- # : 1 00:03:53.325 11:02:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.325 11:02:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.325 11:02:59 event -- scripts/common.sh@365 -- # decimal 1 00:03:53.325 11:02:59 event -- scripts/common.sh@353 -- # local d=1 00:03:53.325 11:02:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.325 11:02:59 event -- scripts/common.sh@355 -- # echo 1 00:03:53.325 11:02:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.325 11:02:59 event -- scripts/common.sh@366 -- # decimal 2 00:03:53.325 11:02:59 event -- scripts/common.sh@353 -- # local d=2 00:03:53.325 11:02:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.325 11:02:59 event -- scripts/common.sh@355 -- # echo 2 00:03:53.325 11:02:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.325 11:02:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.325 11:02:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.325 11:02:59 event -- scripts/common.sh@368 -- # return 0 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:53.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.325 --rc genhtml_branch_coverage=1 00:03:53.325 --rc genhtml_function_coverage=1 00:03:53.325 --rc genhtml_legend=1 00:03:53.325 --rc geninfo_all_blocks=1 00:03:53.325 --rc geninfo_unexecuted_blocks=1 00:03:53.325 00:03:53.325 ' 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:53.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.325 --rc genhtml_branch_coverage=1 00:03:53.325 --rc genhtml_function_coverage=1 00:03:53.325 --rc genhtml_legend=1 00:03:53.325 --rc geninfo_all_blocks=1 00:03:53.325 --rc geninfo_unexecuted_blocks=1 00:03:53.325 00:03:53.325 ' 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:53.325 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:53.325 --rc genhtml_branch_coverage=1 00:03:53.325 --rc genhtml_function_coverage=1 00:03:53.325 --rc genhtml_legend=1 00:03:53.325 --rc geninfo_all_blocks=1 00:03:53.325 --rc geninfo_unexecuted_blocks=1 00:03:53.325 00:03:53.325 ' 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:53.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.325 --rc genhtml_branch_coverage=1 00:03:53.325 --rc genhtml_function_coverage=1 00:03:53.325 --rc genhtml_legend=1 00:03:53.325 --rc geninfo_all_blocks=1 00:03:53.325 --rc geninfo_unexecuted_blocks=1 00:03:53.325 00:03:53.325 ' 00:03:53.325 11:02:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:53.325 11:02:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:53.325 11:02:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:03:53.325 11:02:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.325 11:02:59 event -- common/autotest_common.sh@10 -- # set +x 00:03:53.586 ************************************ 00:03:53.586 START TEST event_perf 00:03:53.586 ************************************ 00:03:53.586 11:02:59 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:53.586 Running I/O for 1 seconds...[2024-12-06 11:02:59.520242] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:53.586 [2024-12-06 11:02:59.520340] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174403 ] 00:03:53.586 [2024-12-06 11:02:59.608738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:53.586 [2024-12-06 11:02:59.650010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:53.586 [2024-12-06 11:02:59.650124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:53.586 [2024-12-06 11:02:59.650282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.586 Running I/O for 1 seconds...[2024-12-06 11:02:59.650282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:54.529 00:03:54.529 lcore 0: 174631 00:03:54.529 lcore 1: 174630 00:03:54.529 lcore 2: 174627 00:03:54.529 lcore 3: 174630 00:03:54.529 done. 
00:03:54.529 00:03:54.529 real 0m1.186s 00:03:54.529 user 0m4.109s 00:03:54.529 sys 0m0.074s 00:03:54.529 11:03:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.529 11:03:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:54.529 ************************************ 00:03:54.529 END TEST event_perf 00:03:54.529 ************************************ 00:03:54.790 11:03:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:54.790 11:03:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:54.790 11:03:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.790 11:03:00 event -- common/autotest_common.sh@10 -- # set +x 00:03:54.790 ************************************ 00:03:54.790 START TEST event_reactor 00:03:54.790 ************************************ 00:03:54.790 11:03:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:54.790 [2024-12-06 11:03:00.763497] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:03:54.790 [2024-12-06 11:03:00.763533] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174573 ] 00:03:54.790 [2024-12-06 11:03:00.831633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.790 [2024-12-06 11:03:00.866926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.170 test_start 00:03:56.170 oneshot 00:03:56.170 tick 100 00:03:56.170 tick 100 00:03:56.170 tick 250 00:03:56.170 tick 100 00:03:56.170 tick 100 00:03:56.170 tick 250 00:03:56.170 tick 100 00:03:56.170 tick 500 00:03:56.170 tick 100 00:03:56.170 tick 100 00:03:56.170 tick 250 00:03:56.170 tick 100 00:03:56.170 tick 100 00:03:56.170 test_end 00:03:56.170 00:03:56.170 real 0m1.142s 00:03:56.170 user 0m1.081s 00:03:56.170 sys 0m0.057s 00:03:56.170 11:03:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.170 11:03:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:56.170 ************************************ 00:03:56.170 END TEST event_reactor 00:03:56.170 ************************************ 00:03:56.170 11:03:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:56.170 11:03:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:56.170 11:03:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.170 11:03:01 event -- common/autotest_common.sh@10 -- # set +x 00:03:56.170 ************************************ 00:03:56.170 START TEST event_reactor_perf 00:03:56.170 ************************************ 00:03:56.170 11:03:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:03:56.170 [2024-12-06 11:03:01.994464] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:03:56.170 [2024-12-06 11:03:01.994570] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174900 ] 00:03:56.170 [2024-12-06 11:03:02.077588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.170 [2024-12-06 11:03:02.115193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.112 test_start 00:03:57.112 test_end 00:03:57.112 Performance: 370341 events per second 00:03:57.112 00:03:57.112 real 0m1.175s 00:03:57.112 user 0m1.099s 00:03:57.113 sys 0m0.072s 00:03:57.113 11:03:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.113 11:03:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:57.113 ************************************ 00:03:57.113 END TEST event_reactor_perf 00:03:57.113 ************************************ 00:03:57.113 11:03:03 event -- event/event.sh@49 -- # uname -s 00:03:57.113 11:03:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:57.113 11:03:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:57.113 11:03:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.113 11:03:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.113 11:03:03 event -- common/autotest_common.sh@10 -- # set +x 00:03:57.113 ************************************ 00:03:57.113 START TEST event_scheduler 00:03:57.113 ************************************ 00:03:57.113 11:03:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:57.375 * Looking for test storage... 00:03:57.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.375 11:03:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:57.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.375 --rc genhtml_branch_coverage=1 00:03:57.375 --rc genhtml_function_coverage=1 00:03:57.375 --rc genhtml_legend=1 00:03:57.375 --rc geninfo_all_blocks=1 00:03:57.375 --rc geninfo_unexecuted_blocks=1 00:03:57.375 00:03:57.375 ' 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:57.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.375 --rc genhtml_branch_coverage=1 00:03:57.375 --rc genhtml_function_coverage=1 00:03:57.375 --rc 
genhtml_legend=1 00:03:57.375 --rc geninfo_all_blocks=1 00:03:57.375 --rc geninfo_unexecuted_blocks=1 00:03:57.375 00:03:57.375 ' 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:57.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.375 --rc genhtml_branch_coverage=1 00:03:57.375 --rc genhtml_function_coverage=1 00:03:57.375 --rc genhtml_legend=1 00:03:57.375 --rc geninfo_all_blocks=1 00:03:57.375 --rc geninfo_unexecuted_blocks=1 00:03:57.375 00:03:57.375 ' 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:57.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.375 --rc genhtml_branch_coverage=1 00:03:57.375 --rc genhtml_function_coverage=1 00:03:57.375 --rc genhtml_legend=1 00:03:57.375 --rc geninfo_all_blocks=1 00:03:57.375 --rc geninfo_unexecuted_blocks=1 00:03:57.375 00:03:57.375 ' 00:03:57.375 11:03:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:57.375 11:03:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3175287 00:03:57.375 11:03:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.375 11:03:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:57.375 11:03:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3175287 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3175287 ']' 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.375 11:03:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:57.375 [2024-12-06 11:03:03.483344] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:03:57.375 [2024-12-06 11:03:03.483400] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175287 ] 00:03:57.636 [2024-12-06 11:03:03.548716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:57.636 [2024-12-06 11:03:03.580070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.636 [2024-12-06 11:03:03.580227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.636 [2024-12-06 11:03:03.580271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:57.636 [2024-12-06 11:03:03.580274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:03:58.209 11:03:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.209 [2024-12-06 11:03:04.286563] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:58.209 [2024-12-06 11:03:04.286577] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:58.209 [2024-12-06 11:03:04.286586] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:58.209 [2024-12-06 11:03:04.286591] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:58.209 [2024-12-06 11:03:04.286595] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.209 11:03:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.209 [2024-12-06 11:03:04.347912] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.209 11:03:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.209 11:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.470 ************************************ 00:03:58.470 START TEST scheduler_create_thread 00:03:58.470 ************************************ 00:03:58.470 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:03:58.470 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 2 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 3 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 4 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 5 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 6 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 7 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 8 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.471 9 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.471 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:59.043 10 00:03:59.043 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.043 11:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:59.043 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.043 11:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.428 11:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.428 11:03:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:00.428 11:03:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:00.428 11:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.428 11:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.001 11:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.001 11:03:07 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:01.001 11:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.001 11:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.944 11:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.944 11:03:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:01.944 11:03:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:01.944 11:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.944 11:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.517 11:03:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.517 00:04:02.517 real 0m4.223s 00:04:02.517 user 0m0.026s 00:04:02.517 sys 0m0.006s 00:04:02.517 11:03:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.517 11:03:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.517 ************************************ 00:04:02.517 END TEST scheduler_create_thread 00:04:02.517 ************************************ 00:04:02.517 11:03:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:02.517 11:03:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3175287 00:04:02.517 11:03:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3175287 ']' 00:04:02.517 11:03:08 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3175287 00:04:02.517 11:03:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:02.517 11:03:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.517 11:03:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175287 00:04:02.778 11:03:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:02.778 11:03:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:02.778 11:03:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175287' 00:04:02.778 killing process with pid 3175287 00:04:02.778 11:03:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3175287 00:04:02.778 11:03:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3175287 00:04:02.778 [2024-12-06 11:03:08.889088] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:03.039 00:04:03.039 real 0m5.824s 00:04:03.039 user 0m12.995s 00:04:03.039 sys 0m0.400s 00:04:03.039 11:03:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.039 11:03:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:03.039 ************************************ 00:04:03.039 END TEST event_scheduler 00:04:03.039 ************************************ 00:04:03.039 11:03:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:03.039 11:03:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:03.039 11:03:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.039 11:03:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.039 11:03:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.039 ************************************ 00:04:03.039 START TEST app_repeat 00:04:03.039 ************************************ 00:04:03.039 11:03:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3176615 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3176615' 00:04:03.039 Process app_repeat pid: 3176615 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:03.039 spdk_app_start Round 0 00:04:03.039 11:03:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3176615 /var/tmp/spdk-nbd.sock 00:04:03.039 11:03:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3176615 ']' 00:04:03.039 11:03:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:03.039 11:03:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.039 11:03:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:03.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:03.039 11:03:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.039 11:03:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:03.039 [2024-12-06 11:03:09.150246] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:03.039 [2024-12-06 11:03:09.150306] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3176615 ] 00:04:03.301 [2024-12-06 11:03:09.230572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.301 [2024-12-06 11:03:09.269059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.301 [2024-12-06 11:03:09.269151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.301 11:03:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.301 11:03:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:03.301 11:03:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:03.562 Malloc0 00:04:03.562 11:03:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:03.562 Malloc1 00:04:03.562 11:03:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:03.562 
11:03:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:03.562 11:03:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:03.822 /dev/nbd0 00:04:03.822 11:03:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:03.822 11:03:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:03.822 1+0 records in 00:04:03.822 1+0 records out 00:04:03.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213678 s, 19.2 MB/s 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:03.822 11:03:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:03.823 11:03:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:03.823 11:03:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:03.823 11:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:03.823 11:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:03.823 11:03:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:04.084 /dev/nbd1 00:04:04.084 11:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:04.084 11:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:04.084 11:03:10 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:04.084 1+0 records in 00:04:04.084 1+0 records out 00:04:04.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306407 s, 13.4 MB/s 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:04.084 11:03:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:04.084 11:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:04.084 11:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:04.084 11:03:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:04.084 11:03:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.084 11:03:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:04.346 { 00:04:04.346 "nbd_device": "/dev/nbd0", 00:04:04.346 "bdev_name": "Malloc0" 00:04:04.346 }, 00:04:04.346 { 00:04:04.346 "nbd_device": "/dev/nbd1", 00:04:04.346 "bdev_name": "Malloc1" 00:04:04.346 } 00:04:04.346 ]' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:04.346 { 00:04:04.346 "nbd_device": "/dev/nbd0", 00:04:04.346 "bdev_name": "Malloc0" 00:04:04.346 
}, 00:04:04.346 { 00:04:04.346 "nbd_device": "/dev/nbd1", 00:04:04.346 "bdev_name": "Malloc1" 00:04:04.346 } 00:04:04.346 ]' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:04.346 /dev/nbd1' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:04.346 /dev/nbd1' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:04.346 256+0 records in 00:04:04.346 256+0 records out 00:04:04.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126551 s, 82.9 MB/s 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:04.346 256+0 records in 00:04:04.346 256+0 records out 00:04:04.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218474 s, 48.0 MB/s 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:04.346 256+0 records in 00:04:04.346 256+0 records out 00:04:04.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177756 s, 59.0 MB/s 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:04.346 11:03:10 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:04.346 11:03:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:04.606 11:03:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:04.867 11:03:10 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:04.867 11:03:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:04.868 11:03:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.868 11:03:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:04.868 11:03:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:04.868 11:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:04.868 11:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:05.128 11:03:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:05.128 11:03:11 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:05.389 11:03:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:05.389 [2024-12-06 11:03:11.439705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:05.389 [2024-12-06 11:03:11.475834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.389 [2024-12-06 11:03:11.475837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.389 [2024-12-06 11:03:11.507606] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:05.389 [2024-12-06 11:03:11.507643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:08.690 11:03:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:08.690 11:03:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:08.690 spdk_app_start Round 1 00:04:08.690 11:03:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3176615 /var/tmp/spdk-nbd.sock 00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3176615 ']' 00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:08.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.690 11:03:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:08.690 11:03:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:08.690 Malloc0 00:04:08.690 11:03:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:08.690 Malloc1 00:04:08.690 11:03:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.690 11:03:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:08.950 /dev/nbd0 00:04:08.950 11:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:08.950 11:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:08.950 1+0 records in 00:04:08.950 1+0 records out 00:04:08.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244908 s, 16.7 MB/s 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:08.950 11:03:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:08.950 11:03:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:08.950 11:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:08.950 11:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.950 11:03:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:09.211 /dev/nbd1 00:04:09.211 11:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:09.211 11:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:09.211 1+0 records in 00:04:09.211 1+0 records out 00:04:09.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215865 s, 19.0 MB/s 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:09.211 11:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:09.212 11:03:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:09.212 11:03:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:09.212 11:03:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:09.212 11:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:09.212 11:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:09.212 11:03:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:09.212 11:03:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.212 11:03:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:09.473 { 00:04:09.473 "nbd_device": "/dev/nbd0", 00:04:09.473 "bdev_name": "Malloc0" 00:04:09.473 }, 00:04:09.473 { 00:04:09.473 "nbd_device": "/dev/nbd1", 00:04:09.473 "bdev_name": "Malloc1" 00:04:09.473 } 00:04:09.473 ]' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:09.473 { 00:04:09.473 "nbd_device": "/dev/nbd0", 00:04:09.473 "bdev_name": "Malloc0" 00:04:09.473 }, 00:04:09.473 { 00:04:09.473 "nbd_device": "/dev/nbd1", 00:04:09.473 "bdev_name": "Malloc1" 00:04:09.473 } 00:04:09.473 ]' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:09.473 /dev/nbd1' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:09.473 11:03:15 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:09.473 /dev/nbd1' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:09.473 256+0 records in 00:04:09.473 256+0 records out 00:04:09.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121613 s, 86.2 MB/s 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:09.473 256+0 records in 00:04:09.473 256+0 records out 00:04:09.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179213 s, 58.5 MB/s 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:09.473 256+0 records in 00:04:09.473 256+0 records out 00:04:09.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178079 s, 58.9 MB/s 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:09.473 11:03:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:09.733 11:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:09.733 11:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:09.733 11:03:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:09.733 11:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:09.733 11:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:09.734 11:03:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:09.734 11:03:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:09.734 11:03:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:09.734 11:03:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:09.734 11:03:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:09.994 11:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:09.994 11:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:09.994 11:03:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:09.994 11:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:09.994 11:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:09.994 11:03:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:09.994 11:03:15 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:09.994 11:03:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:09.994 11:03:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:09.994 11:03:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.994 11:03:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:10.253 11:03:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:10.254 11:03:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:10.254 11:03:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:10.514 [2024-12-06 11:03:16.520182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.514 [2024-12-06 11:03:16.556697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.514 [2024-12-06 11:03:16.556700] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.514 [2024-12-06 11:03:16.589310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:10.514 [2024-12-06 11:03:16.589346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:13.809 11:03:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.809 11:03:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:13.809 spdk_app_start Round 2 00:04:13.809 11:03:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3176615 /var/tmp/spdk-nbd.sock 00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3176615 ']' 00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.809 11:03:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:13.809 11:03:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.809 Malloc0 00:04:13.809 11:03:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.809 Malloc1 00:04:13.809 11:03:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:13.809 11:03:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.810 11:03:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:13.810 11:03:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:13.810 11:03:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:13.810 11:03:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.810 11:03:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:14.070 /dev/nbd0 00:04:14.070 11:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:14.070 11:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:14.070 1+0 records in 00:04:14.070 1+0 records out 00:04:14.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027673 s, 14.8 MB/s 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:14.070 11:03:20 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:14.070 11:03:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:14.070 11:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.070 11:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.070 11:03:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:14.330 /dev/nbd1 00:04:14.330 11:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:14.330 11:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:14.330 1+0 records in 00:04:14.330 1+0 records out 00:04:14.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167163 s, 24.5 MB/s 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:14.330 11:03:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:14.330 11:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.330 11:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.330 11:03:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.330 11:03:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.330 11:03:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:14.590 { 00:04:14.590 "nbd_device": "/dev/nbd0", 00:04:14.590 "bdev_name": "Malloc0" 00:04:14.590 }, 00:04:14.590 { 00:04:14.590 "nbd_device": "/dev/nbd1", 00:04:14.590 "bdev_name": "Malloc1" 00:04:14.590 } 00:04:14.590 ]' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:14.590 { 00:04:14.590 "nbd_device": "/dev/nbd0", 00:04:14.590 "bdev_name": "Malloc0" 00:04:14.590 }, 00:04:14.590 { 00:04:14.590 "nbd_device": "/dev/nbd1", 00:04:14.590 "bdev_name": "Malloc1" 00:04:14.590 } 00:04:14.590 ]' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:14.590 /dev/nbd1' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.590 11:03:20 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:14.590 /dev/nbd1' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:14.590 256+0 records in 00:04:14.590 256+0 records out 00:04:14.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117267 s, 89.4 MB/s 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:14.590 256+0 records in 00:04:14.590 256+0 records out 00:04:14.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168893 s, 62.1 MB/s 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:14.590 256+0 records in 00:04:14.590 256+0 records out 00:04:14.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191628 s, 54.7 MB/s 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.590 11:03:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.851 11:03:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:15.112 11:03:21 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:15.112 11:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:15.372 11:03:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:15.372 11:03:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:15.372 11:03:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:15.632 [2024-12-06 11:03:21.587354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.632 [2024-12-06 11:03:21.623353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.632 [2024-12-06 11:03:21.623355] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.632 [2024-12-06 11:03:21.655252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:15.632 [2024-12-06 11:03:21.655287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:18.933 11:03:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3176615 /var/tmp/spdk-nbd.sock 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3176615 ']' 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:18.933 11:03:24 event.app_repeat -- event/event.sh@39 -- # killprocess 3176615 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3176615 ']' 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3176615 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3176615 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3176615' 00:04:18.933 killing process with pid 3176615 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3176615 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3176615 00:04:18.933 spdk_app_start is called in Round 0. 00:04:18.933 Shutdown signal received, stop current app iteration 00:04:18.933 Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 reinitialization... 00:04:18.933 spdk_app_start is called in Round 1. 00:04:18.933 Shutdown signal received, stop current app iteration 00:04:18.933 Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 reinitialization... 00:04:18.933 spdk_app_start is called in Round 2. 
00:04:18.933 Shutdown signal received, stop current app iteration 00:04:18.933 Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 reinitialization... 00:04:18.933 spdk_app_start is called in Round 3. 00:04:18.933 Shutdown signal received, stop current app iteration 00:04:18.933 11:03:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:18.933 11:03:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:18.933 00:04:18.933 real 0m15.689s 00:04:18.933 user 0m34.204s 00:04:18.933 sys 0m2.298s 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.933 11:03:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.933 ************************************ 00:04:18.933 END TEST app_repeat 00:04:18.933 ************************************ 00:04:18.933 11:03:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:18.933 11:03:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:18.933 11:03:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.933 11:03:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.933 11:03:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.933 ************************************ 00:04:18.933 START TEST cpu_locks 00:04:18.933 ************************************ 00:04:18.933 11:03:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:18.933 * Looking for test storage... 
00:04:18.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:18.933 11:03:24 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:18.933 11:03:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:18.933 11:03:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.933 11:03:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.933 --rc genhtml_branch_coverage=1 00:04:18.933 --rc genhtml_function_coverage=1 00:04:18.933 --rc genhtml_legend=1 00:04:18.933 --rc geninfo_all_blocks=1 00:04:18.933 --rc geninfo_unexecuted_blocks=1 00:04:18.933 00:04:18.933 ' 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.933 --rc genhtml_branch_coverage=1 00:04:18.933 --rc genhtml_function_coverage=1 00:04:18.933 --rc genhtml_legend=1 00:04:18.933 --rc geninfo_all_blocks=1 00:04:18.933 --rc geninfo_unexecuted_blocks=1 
00:04:18.933 00:04:18.933 ' 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.933 --rc genhtml_branch_coverage=1 00:04:18.933 --rc genhtml_function_coverage=1 00:04:18.933 --rc genhtml_legend=1 00:04:18.933 --rc geninfo_all_blocks=1 00:04:18.933 --rc geninfo_unexecuted_blocks=1 00:04:18.933 00:04:18.933 ' 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.933 --rc genhtml_branch_coverage=1 00:04:18.933 --rc genhtml_function_coverage=1 00:04:18.933 --rc genhtml_legend=1 00:04:18.933 --rc geninfo_all_blocks=1 00:04:18.933 --rc geninfo_unexecuted_blocks=1 00:04:18.933 00:04:18.933 ' 00:04:18.933 11:03:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:18.933 11:03:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:18.933 11:03:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:18.933 11:03:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.933 11:03:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:19.194 ************************************ 00:04:19.194 START TEST default_locks 00:04:19.194 ************************************ 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3180383 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3180383 00:04:19.194 11:03:25 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3180383 ']' 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.194 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:19.194 [2024-12-06 11:03:25.181139] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:19.194 [2024-12-06 11:03:25.181188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3180383 ] 00:04:19.194 [2024-12-06 11:03:25.259329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.194 [2024-12-06 11:03:25.295189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.135 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.135 11:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:20.135 11:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3180383 00:04:20.135 11:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3180383 00:04:20.135 11:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:20.396 lslocks: write error 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3180383 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3180383 ']' 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3180383 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3180383 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3180383' 00:04:20.396 killing process with pid 3180383 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3180383 00:04:20.396 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3180383 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3180383 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3180383 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3180383 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3180383 ']' 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3180383) - No such process 00:04:20.657 ERROR: process (pid: 3180383) is no longer running 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:20.657 00:04:20.657 real 0m1.648s 00:04:20.657 user 0m1.762s 00:04:20.657 sys 0m0.572s 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.657 11:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.657 ************************************ 00:04:20.657 END TEST default_locks 00:04:20.657 ************************************ 00:04:20.657 11:03:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:20.657 11:03:26 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.657 11:03:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.657 11:03:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.918 ************************************ 00:04:20.918 START TEST default_locks_via_rpc 00:04:20.918 ************************************ 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3180755 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3180755 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3180755 ']' 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.918 11:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.918 [2024-12-06 11:03:26.902497] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:20.918 [2024-12-06 11:03:26.902549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3180755 ] 00:04:20.918 [2024-12-06 11:03:26.981201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.918 [2024-12-06 11:03:27.019812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.935 11:03:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3180755 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3180755 00:04:21.935 11:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3180755 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3180755 ']' 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3180755 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3180755 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3180755' 00:04:22.244 killing process with pid 3180755 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3180755 00:04:22.244 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3180755 00:04:22.519 00:04:22.519 real 0m1.752s 00:04:22.519 user 0m1.888s 00:04:22.519 sys 0m0.586s 00:04:22.519 11:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.519 11:03:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.519 ************************************ 00:04:22.519 END TEST default_locks_via_rpc 00:04:22.519 ************************************ 00:04:22.519 11:03:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:22.519 11:03:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.519 11:03:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.519 11:03:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:22.519 ************************************ 00:04:22.519 START TEST non_locking_app_on_locked_coremask 00:04:22.519 ************************************ 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3181130 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3181130 /var/tmp/spdk.sock 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3181130 ']' 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:22.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.519 11:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:22.781 [2024-12-06 11:03:28.739481] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:22.781 [2024-12-06 11:03:28.739532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181130 ] 00:04:22.781 [2024-12-06 11:03:28.816889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.781 [2024-12-06 11:03:28.853114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3181302 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3181302 /var/tmp/spdk2.sock 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3181302 ']' 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:23.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.355 11:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.617 [2024-12-06 11:03:29.571396] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:23.617 [2024-12-06 11:03:29.571453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181302 ] 00:04:23.617 [2024-12-06 11:03:29.694136] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:23.617 [2024-12-06 11:03:29.694166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.617 [2024-12-06 11:03:29.766891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.558 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.558 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:24.558 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3181130 00:04:24.558 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3181130 00:04:24.558 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:24.819 lslocks: write error 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3181130 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3181130 ']' 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3181130 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181130 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3181130' 00:04:24.819 killing process with pid 3181130 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3181130 00:04:24.819 11:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3181130 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3181302 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3181302 ']' 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3181302 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181302 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181302' 00:04:25.391 killing process with pid 3181302 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3181302 00:04:25.391 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3181302 00:04:25.652 00:04:25.652 real 0m2.984s 00:04:25.652 user 0m3.284s 00:04:25.652 sys 0m0.931s 00:04:25.652 11:03:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.652 11:03:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.652 ************************************ 00:04:25.652 END TEST non_locking_app_on_locked_coremask 00:04:25.652 ************************************ 00:04:25.652 11:03:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:25.652 11:03:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.652 11:03:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.652 11:03:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.652 ************************************ 00:04:25.652 START TEST locking_app_on_unlocked_coremask 00:04:25.652 ************************************ 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3181841 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3181841 /var/tmp/spdk.sock 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3181841 ']' 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.652 11:03:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.652 11:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.652 [2024-12-06 11:03:31.799106] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:25.652 [2024-12-06 11:03:31.799162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181841 ] 00:04:25.913 [2024-12-06 11:03:31.879586] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:25.913 [2024-12-06 11:03:31.879617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.913 [2024-12-06 11:03:31.920316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3181867 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3181867 /var/tmp/spdk2.sock 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3181867 ']' 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:26.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.485 11:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:26.485 [2024-12-06 11:03:32.613477] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:26.485 [2024-12-06 11:03:32.613531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181867 ] 00:04:26.746 [2024-12-06 11:03:32.738625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.746 [2024-12-06 11:03:32.812174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.317 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.317 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:27.317 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3181867 00:04:27.317 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:27.317 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3181867 00:04:27.887 lslocks: write error 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3181841 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3181841 ']' 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3181841 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181841 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181841' 00:04:27.887 killing process with pid 3181841 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3181841 00:04:27.887 11:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3181841 00:04:28.148 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3181867 00:04:28.148 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3181867 ']' 00:04:28.148 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3181867 00:04:28.148 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:28.148 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.148 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181867 00:04:28.408 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.408 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.408 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181867' 00:04:28.408 killing process with pid 3181867 00:04:28.408 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3181867 00:04:28.408 11:03:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3181867 00:04:28.408 00:04:28.408 real 0m2.822s 00:04:28.408 user 0m3.119s 00:04:28.408 sys 0m0.854s 00:04:28.408 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.408 11:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.408 ************************************ 00:04:28.408 END TEST locking_app_on_unlocked_coremask 00:04:28.408 ************************************ 00:04:28.669 11:03:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:28.669 11:03:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.669 11:03:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.669 11:03:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.669 ************************************ 00:04:28.669 START TEST locking_app_on_locked_coremask 00:04:28.669 ************************************ 00:04:28.669 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:28.669 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3182410 00:04:28.669 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3182410 /var/tmp/spdk.sock 00:04:28.669 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.669 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3182410 ']' 00:04:28.670 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:28.670 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.670 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.670 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.670 11:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.670 [2024-12-06 11:03:34.705068] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:28.670 [2024-12-06 11:03:34.705128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182410 ] 00:04:28.670 [2024-12-06 11:03:34.787204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.670 [2024-12-06 11:03:34.829107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3182563 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3182563 /var/tmp/spdk2.sock 
00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3182563 /var/tmp/spdk2.sock 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:29.625 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3182563 /var/tmp/spdk2.sock 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3182563 ']' 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:29.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.626 11:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.626 [2024-12-06 11:03:35.522593] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:29.626 [2024-12-06 11:03:35.522647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182563 ] 00:04:29.626 [2024-12-06 11:03:35.644285] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3182410 has claimed it. 00:04:29.626 [2024-12-06 11:03:35.644328] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:30.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3182563) - No such process 00:04:30.198 ERROR: process (pid: 3182563) is no longer running 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3182410 00:04:30.198 11:03:36 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3182410 00:04:30.198 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.770 lslocks: write error 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3182410 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3182410 ']' 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3182410 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3182410 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3182410' 00:04:30.770 killing process with pid 3182410 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3182410 00:04:30.770 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3182410 00:04:31.032 00:04:31.032 real 0m2.313s 00:04:31.032 user 0m2.586s 00:04:31.032 sys 0m0.641s 00:04:31.032 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.032 11:03:36 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.032 ************************************ 00:04:31.032 END TEST locking_app_on_locked_coremask 00:04:31.032 ************************************ 00:04:31.032 11:03:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:31.032 11:03:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.032 11:03:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.032 11:03:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.032 ************************************ 00:04:31.032 START TEST locking_overlapped_coremask 00:04:31.032 ************************************ 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3182924 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3182924 /var/tmp/spdk.sock 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3182924 ']' 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.032 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.032 [2024-12-06 11:03:37.081623] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:31.032 [2024-12-06 11:03:37.081674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182924 ] 00:04:31.032 [2024-12-06 11:03:37.160648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:31.293 [2024-12-06 11:03:37.200791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.293 [2024-12-06 11:03:37.200910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.293 [2024-12-06 11:03:37.201128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3183069 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3183069 /var/tmp/spdk2.sock 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3183069 /var/tmp/spdk2.sock 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3183069 /var/tmp/spdk2.sock 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3183069 ']' 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.866 11:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.866 [2024-12-06 11:03:37.938491] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:31.866 [2024-12-06 11:03:37.938545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183069 ] 00:04:31.866 [2024-12-06 11:03:38.030594] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3182924 has claimed it. 00:04:31.866 [2024-12-06 11:03:38.030623] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:32.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3183069) - No such process 00:04:32.438 ERROR: process (pid: 3183069) is no longer running 00:04:32.438 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.438 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3182924 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3182924 ']' 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3182924 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.439 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3182924 00:04:32.699 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.699 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.699 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3182924' 00:04:32.699 killing process with pid 3182924 00:04:32.699 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3182924 00:04:32.699 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3182924 00:04:32.699 00:04:32.699 real 0m1.808s 00:04:32.699 user 0m5.256s 00:04:32.699 sys 0m0.373s 00:04:32.699 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.699 11:03:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.699 
************************************ 00:04:32.699 END TEST locking_overlapped_coremask 00:04:32.699 ************************************ 00:04:32.959 11:03:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:32.959 11:03:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.959 11:03:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.959 11:03:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.959 ************************************ 00:04:32.959 START TEST locking_overlapped_coremask_via_rpc 00:04:32.959 ************************************ 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3183298 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3183298 /var/tmp/spdk.sock 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3183298 ']' 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:32.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.960 11:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.960 [2024-12-06 11:03:38.964945] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:32.960 [2024-12-06 11:03:38.964998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183298 ] 00:04:32.960 [2024-12-06 11:03:39.042707] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:32.960 [2024-12-06 11:03:39.042734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:32.960 [2024-12-06 11:03:39.081544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.960 [2024-12-06 11:03:39.081663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.960 [2024-12-06 11:03:39.081666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3183531 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 3183531 /var/tmp/spdk2.sock 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3183531 ']' 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:33.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.901 11:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.901 [2024-12-06 11:03:39.801758] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:33.901 [2024-12-06 11:03:39.801815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183531 ] 00:04:33.901 [2024-12-06 11:03:39.900651] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:33.901 [2024-12-06 11:03:39.900673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:33.901 [2024-12-06 11:03:39.964125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.901 [2024-12-06 11:03:39.964290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.901 [2024-12-06 11:03:39.964293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.473 11:03:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.473 [2024-12-06 11:03:40.599923] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3183298 has claimed it. 00:04:34.473 request: 00:04:34.473 { 00:04:34.473 "method": "framework_enable_cpumask_locks", 00:04:34.473 "req_id": 1 00:04:34.473 } 00:04:34.473 Got JSON-RPC error response 00:04:34.473 response: 00:04:34.473 { 00:04:34.473 "code": -32603, 00:04:34.473 "message": "Failed to claim CPU core: 2" 00:04:34.473 } 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:34.473 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3183298 /var/tmp/spdk.sock 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3183298 ']' 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.474 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3183531 /var/tmp/spdk2.sock 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3183531 ']' 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.735 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:34.996 00:04:34.996 real 0m2.075s 00:04:34.996 user 0m0.853s 00:04:34.996 sys 0m0.135s 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.996 11:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.996 ************************************ 00:04:34.996 END TEST locking_overlapped_coremask_via_rpc 00:04:34.996 ************************************ 00:04:34.996 11:03:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:34.996 11:03:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3183298 ]] 00:04:34.996 11:03:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3183298 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3183298 ']' 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3183298 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3183298 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3183298' 00:04:34.996 killing process with pid 3183298 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3183298 00:04:34.996 11:03:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3183298 00:04:35.258 11:03:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3183531 ]] 00:04:35.258 11:03:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3183531 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3183531 ']' 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3183531 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3183531 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3183531' 00:04:35.258 killing process with pid 3183531 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3183531 00:04:35.258 11:03:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3183531 00:04:35.519 11:03:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:35.519 11:03:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:35.519 11:03:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3183298 ]] 00:04:35.519 11:03:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3183298 00:04:35.519 11:03:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3183298 ']' 00:04:35.519 11:03:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3183298 00:04:35.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3183298) - No such process 00:04:35.519 11:03:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3183298 is not found' 00:04:35.519 Process with pid 3183298 is not found 00:04:35.519 11:03:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3183531 ]] 00:04:35.519 11:03:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3183531 00:04:35.519 11:03:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3183531 ']' 00:04:35.519 11:03:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3183531 00:04:35.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3183531) - No such process 00:04:35.519 11:03:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3183531 is not found' 00:04:35.519 Process with pid 3183531 is not found 00:04:35.519 11:03:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:35.519 00:04:35.519 real 0m16.686s 00:04:35.519 user 0m28.911s 00:04:35.519 sys 0m5.056s 00:04:35.519 11:03:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.519 
11:03:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.519 ************************************ 00:04:35.519 END TEST cpu_locks 00:04:35.519 ************************************ 00:04:35.519 00:04:35.519 real 0m42.320s 00:04:35.519 user 1m22.670s 00:04:35.519 sys 0m8.334s 00:04:35.519 11:03:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.519 11:03:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.519 ************************************ 00:04:35.519 END TEST event 00:04:35.519 ************************************ 00:04:35.519 11:03:41 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:35.519 11:03:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.519 11:03:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.519 11:03:41 -- common/autotest_common.sh@10 -- # set +x 00:04:35.519 ************************************ 00:04:35.519 START TEST thread 00:04:35.519 ************************************ 00:04:35.519 11:03:41 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:35.780 * Looking for test storage... 
00:04:35.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.780 11:03:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.780 11:03:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.780 11:03:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.780 11:03:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.780 11:03:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.780 11:03:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.780 11:03:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.780 11:03:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.780 11:03:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.780 11:03:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.780 11:03:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.780 11:03:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:35.780 11:03:41 thread -- scripts/common.sh@345 -- # : 1 00:04:35.780 11:03:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.780 11:03:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.780 11:03:41 thread -- scripts/common.sh@365 -- # decimal 1 00:04:35.780 11:03:41 thread -- scripts/common.sh@353 -- # local d=1 00:04:35.780 11:03:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.780 11:03:41 thread -- scripts/common.sh@355 -- # echo 1 00:04:35.780 11:03:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.780 11:03:41 thread -- scripts/common.sh@366 -- # decimal 2 00:04:35.780 11:03:41 thread -- scripts/common.sh@353 -- # local d=2 00:04:35.780 11:03:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.780 11:03:41 thread -- scripts/common.sh@355 -- # echo 2 00:04:35.780 11:03:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.780 11:03:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.780 11:03:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.780 11:03:41 thread -- scripts/common.sh@368 -- # return 0 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.780 --rc genhtml_branch_coverage=1 00:04:35.780 --rc genhtml_function_coverage=1 00:04:35.780 --rc genhtml_legend=1 00:04:35.780 --rc geninfo_all_blocks=1 00:04:35.780 --rc geninfo_unexecuted_blocks=1 00:04:35.780 00:04:35.780 ' 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.780 --rc genhtml_branch_coverage=1 00:04:35.780 --rc genhtml_function_coverage=1 00:04:35.780 --rc genhtml_legend=1 00:04:35.780 --rc geninfo_all_blocks=1 00:04:35.780 --rc geninfo_unexecuted_blocks=1 00:04:35.780 00:04:35.780 ' 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.780 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.780 --rc genhtml_branch_coverage=1 00:04:35.780 --rc genhtml_function_coverage=1 00:04:35.780 --rc genhtml_legend=1 00:04:35.780 --rc geninfo_all_blocks=1 00:04:35.780 --rc geninfo_unexecuted_blocks=1 00:04:35.780 00:04:35.780 ' 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.780 --rc genhtml_branch_coverage=1 00:04:35.780 --rc genhtml_function_coverage=1 00:04:35.780 --rc genhtml_legend=1 00:04:35.780 --rc geninfo_all_blocks=1 00:04:35.780 --rc geninfo_unexecuted_blocks=1 00:04:35.780 00:04:35.780 ' 00:04:35.780 11:03:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.780 11:03:41 thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.780 ************************************ 00:04:35.780 START TEST thread_poller_perf 00:04:35.780 ************************************ 00:04:35.780 11:03:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:35.780 [2024-12-06 11:03:41.933194] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:35.780 [2024-12-06 11:03:41.933305] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184082 ] 00:04:36.040 [2024-12-06 11:03:42.018077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.040 [2024-12-06 11:03:42.059294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.041 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:36.982 [2024-12-06T10:03:43.149Z] ====================================== 00:04:36.982 [2024-12-06T10:03:43.149Z] busy:2406717092 (cyc) 00:04:36.982 [2024-12-06T10:03:43.149Z] total_run_count: 287000 00:04:36.982 [2024-12-06T10:03:43.149Z] tsc_hz: 2400000000 (cyc) 00:04:36.982 [2024-12-06T10:03:43.149Z] ====================================== 00:04:36.982 [2024-12-06T10:03:43.149Z] poller_cost: 8385 (cyc), 3493 (nsec) 00:04:36.982 00:04:36.982 real 0m1.189s 00:04:36.982 user 0m1.113s 00:04:36.982 sys 0m0.072s 00:04:36.982 11:03:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.982 11:03:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.982 ************************************ 00:04:36.982 END TEST thread_poller_perf 00:04:36.982 ************************************ 00:04:36.982 11:03:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:36.982 11:03:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:36.982 11:03:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.982 11:03:43 thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.243 ************************************ 00:04:37.243 START TEST thread_poller_perf 00:04:37.243 
************************************ 00:04:37.243 11:03:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:37.243 [2024-12-06 11:03:43.196357] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:37.243 [2024-12-06 11:03:43.196462] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184364 ] 00:04:37.243 [2024-12-06 11:03:43.279545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.243 [2024-12-06 11:03:43.315971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.243 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:38.183 [2024-12-06T10:03:44.350Z] ====================================== 00:04:38.183 [2024-12-06T10:03:44.350Z] busy:2401852420 (cyc) 00:04:38.183 [2024-12-06T10:03:44.350Z] total_run_count: 3502000 00:04:38.183 [2024-12-06T10:03:44.350Z] tsc_hz: 2400000000 (cyc) 00:04:38.183 [2024-12-06T10:03:44.350Z] ====================================== 00:04:38.183 [2024-12-06T10:03:44.350Z] poller_cost: 685 (cyc), 285 (nsec) 00:04:38.183 00:04:38.183 real 0m1.174s 00:04:38.183 user 0m1.100s 00:04:38.183 sys 0m0.071s 00:04:38.183 11:03:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.183 11:03:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.183 ************************************ 00:04:38.183 END TEST thread_poller_perf 00:04:38.183 ************************************ 00:04:38.444 11:03:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:38.444 00:04:38.444 real 0m2.717s 00:04:38.444 user 0m2.388s 00:04:38.444 sys 0m0.342s 00:04:38.444 11:03:44 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.444 11:03:44 thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.444 ************************************ 00:04:38.444 END TEST thread 00:04:38.444 ************************************ 00:04:38.444 11:03:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:38.444 11:03:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:38.444 11:03:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.444 11:03:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.444 11:03:44 -- common/autotest_common.sh@10 -- # set +x 00:04:38.444 ************************************ 00:04:38.444 START TEST app_cmdline 00:04:38.444 ************************************ 00:04:38.444 11:03:44 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:38.444 * Looking for test storage... 00:04:38.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:38.444 11:03:44 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.444 11:03:44 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.444 11:03:44 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.705 11:03:44 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.705 11:03:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.705 11:03:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.705 11:03:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.705 11:03:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.705 11:03:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.706 11:03:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.706 --rc genhtml_branch_coverage=1 
00:04:38.706 --rc genhtml_function_coverage=1 00:04:38.706 --rc genhtml_legend=1 00:04:38.706 --rc geninfo_all_blocks=1 00:04:38.706 --rc geninfo_unexecuted_blocks=1 00:04:38.706 00:04:38.706 ' 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.706 --rc genhtml_branch_coverage=1 00:04:38.706 --rc genhtml_function_coverage=1 00:04:38.706 --rc genhtml_legend=1 00:04:38.706 --rc geninfo_all_blocks=1 00:04:38.706 --rc geninfo_unexecuted_blocks=1 00:04:38.706 00:04:38.706 ' 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.706 --rc genhtml_branch_coverage=1 00:04:38.706 --rc genhtml_function_coverage=1 00:04:38.706 --rc genhtml_legend=1 00:04:38.706 --rc geninfo_all_blocks=1 00:04:38.706 --rc geninfo_unexecuted_blocks=1 00:04:38.706 00:04:38.706 ' 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.706 --rc genhtml_branch_coverage=1 00:04:38.706 --rc genhtml_function_coverage=1 00:04:38.706 --rc genhtml_legend=1 00:04:38.706 --rc geninfo_all_blocks=1 00:04:38.706 --rc geninfo_unexecuted_blocks=1 00:04:38.706 00:04:38.706 ' 00:04:38.706 11:03:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:38.706 11:03:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3184664 00:04:38.706 11:03:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3184664 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3184664 ']' 00:04:38.706 11:03:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.706 11:03:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:38.706 [2024-12-06 11:03:44.731182] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:38.706 [2024-12-06 11:03:44.731256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184664 ] 00:04:38.706 [2024-12-06 11:03:44.817625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.706 [2024-12-06 11:03:44.859546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:39.647 { 00:04:39.647 "version": "SPDK v25.01-pre git sha1 500d76084", 00:04:39.647 "fields": { 00:04:39.647 "major": 25, 00:04:39.647 "minor": 1, 00:04:39.647 "patch": 0, 00:04:39.647 "suffix": "-pre", 00:04:39.647 "commit": "500d76084" 00:04:39.647 } 00:04:39.647 } 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:39.647 11:03:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:39.647 11:03:45 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:39.906 request: 00:04:39.906 { 00:04:39.907 "method": "env_dpdk_get_mem_stats", 00:04:39.907 "req_id": 1 00:04:39.907 } 00:04:39.907 Got JSON-RPC error response 00:04:39.907 response: 00:04:39.907 { 00:04:39.907 "code": -32601, 00:04:39.907 "message": "Method not found" 00:04:39.907 } 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.907 11:03:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3184664 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3184664 ']' 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3184664 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3184664 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.907 11:03:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.907 11:03:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3184664' 00:04:39.907 killing process with pid 3184664 00:04:39.907 
11:03:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 3184664 00:04:39.907 11:03:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 3184664 00:04:40.167 00:04:40.167 real 0m1.743s 00:04:40.167 user 0m2.079s 00:04:40.167 sys 0m0.462s 00:04:40.167 11:03:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.167 11:03:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:40.167 ************************************ 00:04:40.167 END TEST app_cmdline 00:04:40.167 ************************************ 00:04:40.167 11:03:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:40.167 11:03:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.167 11:03:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.167 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.167 ************************************ 00:04:40.167 START TEST version 00:04:40.167 ************************************ 00:04:40.167 11:03:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:40.428 * Looking for test storage... 
00:04:40.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.428 11:03:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.428 11:03:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.428 11:03:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.428 11:03:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.428 11:03:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.428 11:03:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.428 11:03:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.428 11:03:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.428 11:03:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.428 11:03:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.428 11:03:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.428 11:03:46 version -- scripts/common.sh@344 -- # case "$op" in 00:04:40.428 11:03:46 version -- scripts/common.sh@345 -- # : 1 00:04:40.428 11:03:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.428 11:03:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.428 11:03:46 version -- scripts/common.sh@365 -- # decimal 1 00:04:40.428 11:03:46 version -- scripts/common.sh@353 -- # local d=1 00:04:40.428 11:03:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.428 11:03:46 version -- scripts/common.sh@355 -- # echo 1 00:04:40.428 11:03:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.428 11:03:46 version -- scripts/common.sh@366 -- # decimal 2 00:04:40.428 11:03:46 version -- scripts/common.sh@353 -- # local d=2 00:04:40.428 11:03:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.428 11:03:46 version -- scripts/common.sh@355 -- # echo 2 00:04:40.428 11:03:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.428 11:03:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.428 11:03:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.428 11:03:46 version -- scripts/common.sh@368 -- # return 0 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.428 --rc genhtml_branch_coverage=1 00:04:40.428 --rc genhtml_function_coverage=1 00:04:40.428 --rc genhtml_legend=1 00:04:40.428 --rc geninfo_all_blocks=1 00:04:40.428 --rc geninfo_unexecuted_blocks=1 00:04:40.428 00:04:40.428 ' 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.428 --rc genhtml_branch_coverage=1 00:04:40.428 --rc genhtml_function_coverage=1 00:04:40.428 --rc genhtml_legend=1 00:04:40.428 --rc geninfo_all_blocks=1 00:04:40.428 --rc geninfo_unexecuted_blocks=1 00:04:40.428 00:04:40.428 ' 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.428 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.428 --rc genhtml_branch_coverage=1 00:04:40.428 --rc genhtml_function_coverage=1 00:04:40.428 --rc genhtml_legend=1 00:04:40.428 --rc geninfo_all_blocks=1 00:04:40.428 --rc geninfo_unexecuted_blocks=1 00:04:40.428 00:04:40.428 ' 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.428 --rc genhtml_branch_coverage=1 00:04:40.428 --rc genhtml_function_coverage=1 00:04:40.428 --rc genhtml_legend=1 00:04:40.428 --rc geninfo_all_blocks=1 00:04:40.428 --rc geninfo_unexecuted_blocks=1 00:04:40.428 00:04:40.428 ' 00:04:40.428 11:03:46 version -- app/version.sh@17 -- # get_header_version major 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # tr -d '"' 00:04:40.428 11:03:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # cut -f2 00:04:40.428 11:03:46 version -- app/version.sh@17 -- # major=25 00:04:40.428 11:03:46 version -- app/version.sh@18 -- # get_header_version minor 00:04:40.428 11:03:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # cut -f2 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # tr -d '"' 00:04:40.428 11:03:46 version -- app/version.sh@18 -- # minor=1 00:04:40.428 11:03:46 version -- app/version.sh@19 -- # get_header_version patch 00:04:40.428 11:03:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # cut -f2 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # tr -d '"' 00:04:40.428 
11:03:46 version -- app/version.sh@19 -- # patch=0 00:04:40.428 11:03:46 version -- app/version.sh@20 -- # get_header_version suffix 00:04:40.428 11:03:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # cut -f2 00:04:40.428 11:03:46 version -- app/version.sh@14 -- # tr -d '"' 00:04:40.428 11:03:46 version -- app/version.sh@20 -- # suffix=-pre 00:04:40.428 11:03:46 version -- app/version.sh@22 -- # version=25.1 00:04:40.428 11:03:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:40.428 11:03:46 version -- app/version.sh@28 -- # version=25.1rc0 00:04:40.428 11:03:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:40.428 11:03:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:40.428 11:03:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:40.428 11:03:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:40.428 00:04:40.428 real 0m0.270s 00:04:40.428 user 0m0.152s 00:04:40.428 sys 0m0.161s 00:04:40.428 11:03:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.428 11:03:46 version -- common/autotest_common.sh@10 -- # set +x 00:04:40.428 ************************************ 00:04:40.428 END TEST version 00:04:40.428 ************************************ 00:04:40.428 11:03:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:40.428 11:03:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:40.428 11:03:46 -- spdk/autotest.sh@194 -- # uname -s 00:04:40.428 11:03:46 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:40.428 11:03:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:40.428 11:03:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:40.428 11:03:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:40.689 11:03:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:40.689 11:03:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:40.689 11:03:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.689 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.689 11:03:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:40.689 11:03:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:40.689 11:03:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:40.689 11:03:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:40.689 11:03:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:40.689 11:03:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:40.689 11:03:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:40.689 11:03:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:40.689 11:03:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.689 11:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.689 ************************************ 00:04:40.689 START TEST nvmf_tcp 00:04:40.689 ************************************ 00:04:40.689 11:03:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:40.689 * Looking for test storage... 
00:04:40.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:40.689 11:03:46 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.689 11:03:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.689 11:03:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.689 11:03:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:40.689 11:03:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.950 11:03:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:03:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:40.950 11:03:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:40.950 11:03:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.950 11:03:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.950 ************************************ 00:04:40.950 START TEST nvmf_target_core 00:04:40.950 ************************************ 00:04:40.950 11:03:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:40.950 * Looking for test storage... 
00:04:40.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:40.950 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.951 --rc genhtml_branch_coverage=1 00:04:40.951 --rc genhtml_function_coverage=1 00:04:40.951 --rc genhtml_legend=1 00:04:40.951 --rc geninfo_all_blocks=1 00:04:40.951 --rc geninfo_unexecuted_blocks=1 00:04:40.951 00:04:40.951 ' 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.951 --rc genhtml_branch_coverage=1 
00:04:40.951 --rc genhtml_function_coverage=1 00:04:40.951 --rc genhtml_legend=1 00:04:40.951 --rc geninfo_all_blocks=1 00:04:40.951 --rc geninfo_unexecuted_blocks=1 00:04:40.951 00:04:40.951 ' 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.951 --rc genhtml_branch_coverage=1 00:04:40.951 --rc genhtml_function_coverage=1 00:04:40.951 --rc genhtml_legend=1 00:04:40.951 --rc geninfo_all_blocks=1 00:04:40.951 --rc geninfo_unexecuted_blocks=1 00:04:40.951 00:04:40.951 ' 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.951 --rc genhtml_branch_coverage=1 00:04:40.951 --rc genhtml_function_coverage=1 00:04:40.951 --rc genhtml_legend=1 00:04:40.951 --rc geninfo_all_blocks=1 00:04:40.951 --rc geninfo_unexecuted_blocks=1 00:04:40.951 00:04:40.951 ' 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:40.951 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:41.213 ************************************ 00:04:41.213 START TEST nvmf_abort 00:04:41.213 ************************************ 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:41.213 * Looking for test storage... 
00:04:41.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.213 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.214 
11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.214 --rc genhtml_branch_coverage=1 00:04:41.214 --rc genhtml_function_coverage=1 00:04:41.214 --rc genhtml_legend=1 00:04:41.214 --rc geninfo_all_blocks=1 00:04:41.214 --rc 
geninfo_unexecuted_blocks=1 00:04:41.214 00:04:41.214 ' 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.214 --rc genhtml_branch_coverage=1 00:04:41.214 --rc genhtml_function_coverage=1 00:04:41.214 --rc genhtml_legend=1 00:04:41.214 --rc geninfo_all_blocks=1 00:04:41.214 --rc geninfo_unexecuted_blocks=1 00:04:41.214 00:04:41.214 ' 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.214 --rc genhtml_branch_coverage=1 00:04:41.214 --rc genhtml_function_coverage=1 00:04:41.214 --rc genhtml_legend=1 00:04:41.214 --rc geninfo_all_blocks=1 00:04:41.214 --rc geninfo_unexecuted_blocks=1 00:04:41.214 00:04:41.214 ' 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.214 --rc genhtml_branch_coverage=1 00:04:41.214 --rc genhtml_function_coverage=1 00:04:41.214 --rc genhtml_legend=1 00:04:41.214 --rc geninfo_all_blocks=1 00:04:41.214 --rc geninfo_unexecuted_blocks=1 00:04:41.214 00:04:41.214 ' 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.214 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.475 11:03:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.475 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:41.476 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:49.621 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:49.622 11:03:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:04:49.622 Found 0000:31:00.0 (0x8086 - 0x159b) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:04:49.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:49.622 11:03:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:04:49.622 Found net devices under 0000:31:00.0: cvl_0_0 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:04:49.622 Found net devices under 0000:31:00.1: cvl_0_1 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:49.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:49.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:04:49.622 00:04:49.622 --- 10.0.0.2 ping statistics --- 00:04:49.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:49.622 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:49.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:49.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:04:49.622 00:04:49.622 --- 10.0.0.1 ping statistics --- 00:04:49.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:49.622 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3189680 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3189680 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3189680 ']' 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:49.622 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:49.622 [2024-12-06 11:03:55.454787] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:49.623 [2024-12-06 11:03:55.454852] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:49.623 [2024-12-06 11:03:55.563794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.623 [2024-12-06 11:03:55.617668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:49.623 [2024-12-06 11:03:55.617719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:49.623 [2024-12-06 11:03:55.617728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.623 [2024-12-06 11:03:55.617736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.623 [2024-12-06 11:03:55.617742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:49.623 [2024-12-06 11:03:55.619883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.623 [2024-12-06 11:03:55.620036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.623 [2024-12-06 11:03:55.620036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.195 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.195 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:50.195 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:50.195 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.195 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.196 [2024-12-06 11:03:56.308815] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.196 Malloc0 00:04:50.196 11:03:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.196 Delay0 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.196 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.457 [2024-12-06 11:03:56.386055] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.457 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:50.457 [2024-12-06 11:03:56.557043] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:53.008 Initializing NVMe Controllers 00:04:53.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:53.008 controller IO queue size 128 less than required 00:04:53.008 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:53.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:53.008 Initialization complete. Launching workers. 
00:04:53.008 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28736 00:04:53.008 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28797, failed to submit 62 00:04:53.008 success 28740, unsuccessful 57, failed 0 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:53.008 rmmod nvme_tcp 00:04:53.008 rmmod nvme_fabrics 00:04:53.008 rmmod nvme_keyring 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:53.008 11:03:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3189680 ']' 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3189680 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3189680 ']' 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3189680 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3189680 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3189680' 00:04:53.008 killing process with pid 3189680 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3189680 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3189680 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:53.008 11:03:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:54.925 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:54.925 00:04:54.925 real 0m13.773s 00:04:54.925 user 0m13.784s 00:04:54.925 sys 0m6.949s 00:04:54.925 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.925 11:04:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.925 ************************************ 00:04:54.925 END TEST nvmf_abort 00:04:54.925 ************************************ 00:04:54.925 11:04:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:54.925 11:04:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.925 11:04:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.925 11:04:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:54.925 ************************************ 00:04:54.925 START TEST nvmf_ns_hotplug_stress 00:04:54.925 ************************************ 00:04:54.925 11:04:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:55.186 * Looking for test storage... 00:04:55.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:55.186 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.187 
11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.187 11:04:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.187 --rc genhtml_branch_coverage=1 00:04:55.187 --rc genhtml_function_coverage=1 00:04:55.187 --rc genhtml_legend=1 00:04:55.187 --rc geninfo_all_blocks=1 00:04:55.187 --rc geninfo_unexecuted_blocks=1 00:04:55.187 00:04:55.187 ' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.187 --rc genhtml_branch_coverage=1 00:04:55.187 --rc genhtml_function_coverage=1 00:04:55.187 --rc genhtml_legend=1 00:04:55.187 --rc geninfo_all_blocks=1 00:04:55.187 --rc geninfo_unexecuted_blocks=1 00:04:55.187 00:04:55.187 ' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.187 --rc genhtml_branch_coverage=1 00:04:55.187 --rc genhtml_function_coverage=1 00:04:55.187 --rc genhtml_legend=1 00:04:55.187 --rc geninfo_all_blocks=1 00:04:55.187 --rc geninfo_unexecuted_blocks=1 00:04:55.187 00:04:55.187 ' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.187 --rc genhtml_branch_coverage=1 00:04:55.187 --rc genhtml_function_coverage=1 00:04:55.187 --rc genhtml_legend=1 00:04:55.187 --rc geninfo_all_blocks=1 00:04:55.187 --rc geninfo_unexecuted_blocks=1 00:04:55.187 
00:04:55.187 ' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.187 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:55.188 11:04:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:03.333 11:04:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:03.333 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:03.333 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:03.333 11:04:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:03.333 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:03.334 Found net devices under 0000:31:00.0: cvl_0_0 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:03.334 11:04:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:03.334 Found net devices under 0000:31:00.1: cvl_0_1 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:03.334 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:03.596 11:04:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:03.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:03.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:05:03.596 00:05:03.596 --- 10.0.0.2 ping statistics --- 00:05:03.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:03.596 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:03.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:03.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:05:03.596 00:05:03.596 --- 10.0.0.1 ping statistics --- 00:05:03.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:03.596 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3195083 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3195083 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3195083 ']' 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.596 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:03.596 [2024-12-06 11:04:09.728912] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:03.596 [2024-12-06 11:04:09.728969] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:03.858 [2024-12-06 11:04:09.837244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.858 [2024-12-06 11:04:09.888411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:03.858 [2024-12-06 11:04:09.888465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:03.858 [2024-12-06 11:04:09.888474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:03.858 [2024-12-06 11:04:09.888481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:03.858 [2024-12-06 11:04:09.888488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:03.858 [2024-12-06 11:04:09.890331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.858 [2024-12-06 11:04:09.890502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.858 [2024-12-06 11:04:09.890504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:04.430 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:04.691 [2024-12-06 11:04:10.719279] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.691 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:04.952 11:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:04.952 [2024-12-06 11:04:11.088725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:05.212 11:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:05.212 11:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:05.473 Malloc0 00:05:05.473 11:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:05.473 Delay0 00:05:05.733 11:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.733 11:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:05.992 NULL1 00:05:05.992 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:06.252 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3195613 00:05:06.252 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:06.252 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:06.252 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.252 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.510 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:06.510 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:06.767 true 00:05:06.767 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:06.767 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.767 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.026 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:07.026 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:07.298 true 00:05:07.298 11:04:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:07.298 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.298 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.557 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:07.557 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:07.818 true 00:05:07.818 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:07.818 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.078 11:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.078 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:08.078 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:08.338 true 00:05:08.338 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:08.338 11:04:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.598 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.598 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:08.598 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:08.858 true 00:05:08.858 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:08.858 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.120 11:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.120 11:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:09.121 11:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:09.383 true 00:05:09.383 11:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:09.383 11:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.767 Read completed with error (sct=0, sc=11) 00:05:10.767 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.767 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:10.767 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:11.030 true 00:05:11.030 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:11.030 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.974 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.974 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:11.974 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:11.974 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:12.235 true 00:05:12.235 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:12.235 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.236 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.560 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:12.560 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:12.912 true 00:05:12.912 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:12.912 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.912 11:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.232 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1010 00:05:13.232 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:13.232 true 00:05:13.232 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:13.232 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.493 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.493 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:13.493 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:13.754 true 00:05:13.754 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:13.754 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.138 11:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.138 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:05:15.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.138 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:15.138 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:15.138 true 00:05:15.398 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:15.398 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.972 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.232 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:16.232 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:16.493 true 00:05:16.493 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:16.493 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.753 11:04:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.753 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:16.753 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:17.014 true 00:05:17.014 11:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:17.014 11:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.399 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.399 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:18.399 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:18.399 true 
00:05:18.399 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:18.399 11:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.340 11:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.599 11:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:19.599 11:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:19.599 true 00:05:19.599 11:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:19.599 11:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.859 11:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.119 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:20.119 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:05:20.379 true 00:05:20.379 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:20.379 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.379 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.639 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:20.639 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:20.900 true 00:05:20.900 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:20.900 11:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.900 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.160 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:21.160 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:21.421 true 00:05:21.421 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:21.421 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.682 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.682 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:21.682 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:21.942 true 00:05:21.942 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:21.942 11:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.202 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.202 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:22.202 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:22.462 true 00:05:22.462 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:22.462 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.722 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.722 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:22.722 11:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:22.982 true 00:05:22.982 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:22.982 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.242 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.242 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:23.242 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:23.503 true 00:05:23.503 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:23.503 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:23.763 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.022 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:24.022 11:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:24.022 true 00:05:24.022 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:24.022 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.282 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.544 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:24.544 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:24.544 true 00:05:24.544 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:24.544 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.803 11:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.064 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:25.065 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:25.065 true 00:05:25.323 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:25.323 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.323 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.584 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:25.584 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:25.844 true 00:05:25.844 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:25.844 11:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.784 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.043 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:27.044 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:27.304 true 00:05:27.304 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:27.304 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.246 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.246 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:28.246 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:28.507 true 00:05:28.507 
11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:28.507 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.769 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.769 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:28.769 11:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:29.029 true 00:05:29.029 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:29.029 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.289 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.289 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:29.289 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:29.549 true 00:05:29.549 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:29.549 11:04:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.809 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.809 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:29.809 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:30.069 true 00:05:30.069 11:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:30.069 11:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.454 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.454 11:04:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:31.454 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:31.714 true 00:05:31.714 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:31.714 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.656 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.656 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:32.656 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:32.656 true 00:05:32.917 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:32.917 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.917 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.178 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:33.178 11:04:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:33.438 true 00:05:33.439 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:33.439 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.439 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.733 [2024-12-06 11:04:39.716065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.733 [2024-12-06 11:04:39.716136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.733 [2024-12-06 11:04:39.716172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.733 [2024-12-06 11:04:39.716198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.733 [2024-12-06 11:04:39.716224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.733 [2024-12-06 11:04:39.716252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:05:33.733 [... identical ctrlr_bdev.c:384 nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated; duplicates omitted ...] 00:05:33.735 [2024-12-06 11:04:39.722882]
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.722911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.722946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.722974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 
11:04:39.723735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.723989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 
[2024-12-06 11:04:39.724720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.724780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725580] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.725961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.726006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.735 [2024-12-06 11:04:39.726032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726499] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.726999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 
11:04:39.727469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.727996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 
[2024-12-06 11:04:39.728319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728713] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.728996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.729026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.729057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.736 [2024-12-06 11:04:39.729085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.729994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.730021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.730050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.737 [2024-12-06 11:04:39.730079] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.739 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:33.740 [2024-12-06 11:04:39.740565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.740995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741020] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 
11:04:39.741896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.740 [2024-12-06 11:04:39.741985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.742977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 
[2024-12-06 11:04:39.743096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743524] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.743999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.744974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745011] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.741 [2024-12-06 11:04:39.745844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.742 [2024-12-06 
00:05:33.742 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:05:33.742 11:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:05:33.743 [2024-12-06 11:04:39.749437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.749877] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.750963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 
11:04:39.751032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.751973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 
[2024-12-06 11:04:39.752250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.743 [2024-12-06 11:04:39.752517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752677] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.752988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753597] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.753898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 
11:04:39.754786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.754979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 
[2024-12-06 11:04:39.755675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.744 [2024-12-06 11:04:39.755835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.755872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.755911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.755942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.755974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.756007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.756039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.756069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.756097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.756136] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.745 [2024-12-06 11:04:39.756166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.767977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.768963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769275] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.748 [2024-12-06 11:04:39.769434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.769968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 
11:04:39.770212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.770853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 
[2024-12-06 11:04:39.771266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771714] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.771974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.749 [2024-12-06 11:04:39.772004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772629] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.772977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:33.750 [2024-12-06 11:04:39.773493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.773970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 
[2024-12-06 11:04:39.774318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774740] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.750 [2024-12-06 11:04:39.774777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical *ERROR* entries from ctrlr_bdev.c:384 repeated through 00:05:33.754, omitted] 00:05:33.754 [2024-12-06 11:04:39.786177] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.786989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787077] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.754 [2024-12-06 11:04:39.787290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.787739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 
11:04:39.788340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.788993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 
[2024-12-06 11:04:39.789270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789700] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.789993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.755 [2024-12-06 11:04:39.790982] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 
11:04:39.791899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.791997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.792988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 
[2024-12-06 11:04:39.793147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756 [2024-12-06 11:04:39.793613] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.756
[2024-12-06 11:04:39.804008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.759 [2024-12-06 11:04:39.804699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804758] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.804986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805660] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.805994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 
11:04:39.806755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.806982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 
[2024-12-06 11:04:39.807578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.760 [2024-12-06 11:04:39.807845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.807882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.807913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.807951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.807983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808013] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.808982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809227] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:33.761 [2024-12-06 11:04:39.809259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 
11:04:39.809683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.809988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 
[2024-12-06 11:04:39.810628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.810855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.761 [2024-12-06 11:04:39.811197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.762 [2024-12-06 11:04:39.811228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.762 [2024-12-06 11:04:39.811257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.762 [2024-12-06 11:04:39.811285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.762 [2024-12-06 11:04:39.811313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.762 [2024-12-06 11:04:39.811351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.762 [2024-12-06 11:04:39.811385] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.762 [2024-12-06 11:04:39.811415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* entry repeated verbatim, timestamps 2024-12-06 11:04:39.811450 through 11:04:39.821732]
[2024-12-06 11:04:39.821757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.821782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.821807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.821842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.821879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.821911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.821942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.821975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822194] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.822975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823383] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.823984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 
11:04:39.824292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.824708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.765 [2024-12-06 11:04:39.825358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 
[2024-12-06 11:04:39.825550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.825988] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826895] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.826983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.827983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 
11:04:39.828103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.766 [2024-12-06 11:04:39.828359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.767 [2024-12-06 11:04:39.828384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.767 [2024-12-06 11:04:39.828410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.767 [2024-12-06 11:04:39.828437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.767 [2024-12-06 11:04:39.828463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.767 [2024-12-06 11:04:39.828487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.767 [2024-12-06 11:04:39.828513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.839803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.839832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.839872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.839899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.839926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.839958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.839987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 
[2024-12-06 11:04:39.840238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840648] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.840973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841839] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.841986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.842014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.842043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.842069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.842096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.842130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.770 [2024-12-06 11:04:39.842161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 
11:04:39.842669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.842984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 
[2024-12-06 11:04:39.843912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.843994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844323] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.844997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845201] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.771 [2024-12-06 11:04:39.845434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.845998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:33.772 [2024-12-06 11:04:39.846093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 [2024-12-06 11:04:39.846380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:33.772 
[... identical *ERROR* line repeated at millisecond intervals from 11:04:39.846409 through 11:04:40.111219; interleaved shell output: true 00:05:34.051 ...] 
[2024-12-06 11:04:40.111278] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.051 [2024-12-06 11:04:40.111343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.051 [2024-12-06 11:04:40.111410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.051 [2024-12-06 11:04:40.111465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.051 [2024-12-06 11:04:40.111520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.111996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.112985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113044] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.113960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.114679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 
11:04:40.114754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.115945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.116995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 
[2024-12-06 11:04:40.117639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.117955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.118010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.118067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.118123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.118181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.118254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.052 [2024-12-06 11:04:40.118312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118490] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.118987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.119750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120438] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.120988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.121988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 
11:04:40.122038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.122944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.123477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 
[2024-12-06 11:04:40.124352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.053 [2024-12-06 11:04:40.124627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124927] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.054 [2024-12-06 11:04:40.124970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:34.056 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613
00:05:34.056 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.057 [2024-12-06 11:04:40.137517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.137977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 
[2024-12-06 11:04:40.138003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138582] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.138998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139458] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.057 [2024-12-06 11:04:40.139960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.139984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 
11:04:40.140308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.140984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 
[2024-12-06 11:04:40.141585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.141979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142041] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.142972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.143002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.143034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.143065] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.143100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.143129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.143155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.058 [2024-12-06 11:04:40.143203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.143960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 
11:04:40.144048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 [2024-12-06 11:04:40.144533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 Message suppressed 999 times: [2024-12-06 11:04:40.145114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.059 Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 
11:04:40.155681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.155993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 
[2024-12-06 11:04:40.156952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.156984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157397] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.062 [2024-12-06 11:04:40.157520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.157997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158320] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.158632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 
11:04:40.159827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.159983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 
[2024-12-06 11:04:40.160745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.063 [2024-12-06 11:04:40.160837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.160883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.160914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.160943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.160975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161191] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.161979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064 [2024-12-06 11:04:40.162255] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.064
> SGL length 1 00:05:34.067 [2024-12-06 11:04:40.172995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173613] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.173984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 
11:04:40.174543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.067 [2024-12-06 11:04:40.174847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.174883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.174914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.174948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.174979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 
[2024-12-06 11:04:40.175557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.175871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176436] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.176974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177363] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.177981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.178015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.178043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.178073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.178103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.178134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.178168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.068 [2024-12-06 11:04:40.178197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 
11:04:40.178434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.178969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 
[2024-12-06 11:04:40.179348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179807] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.179965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.069 [2024-12-06 11:04:40.180379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.070 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.190981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.191010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.191040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.191069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.191100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.072 [2024-12-06 11:04:40.191141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191198] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.191987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 
11:04:40.192445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.192976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 
[2024-12-06 11:04:40.193345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193884] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.193980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.194977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.195011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.195042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.195070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.195096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.073 [2024-12-06 11:04:40.195122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195380] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.195984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 
11:04:40.196307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.196979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 
[2024-12-06 11:04:40.197409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197889] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.197981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.198011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.198048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.198078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.198107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.074 [2024-12-06 11:04:40.198143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.075 [2024-12-06 11:04:40.198174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.075 [2024-12-06 11:04:40.198205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.075 [2024-12-06 11:04:40.198238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.075 [2024-12-06 11:04:40.198267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.075 [2024-12-06 11:04:40.198296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.075 [2024-12-06 11:04:40.198329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.075 [2024-12-06 11:04:40.198359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.209846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:05:34.363 [2024-12-06 11:04:40.209878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.209909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.209950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.209980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210319] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.210999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 
11:04:40.211562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.211969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.212000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.212032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.212064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.212098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.212123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.363 [2024-12-06 11:04:40.212155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 
[2024-12-06 11:04:40.212421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212839] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.212875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.213984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214079] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.364 [2024-12-06 11:04:40.214517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.214994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 
11:04:40.215023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.215992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 
[2024-12-06 11:04:40.216254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216680] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.365 [2024-12-06 11:04:40.216708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.366 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:34.369 [2024-12-06 11:04:40.228151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228557] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.228985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.229410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 
11:04:40.230159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.230969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.231000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.231028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 
[2024-12-06 11:04:40.231075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.231107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.231140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.231170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.231200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.369 [2024-12-06 11:04:40.231228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231509] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.370 [2024-12-06 11:04:40.231975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232532] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.232974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 
11:04:40.233445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.370 [2024-12-06 11:04:40.233974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 
[2024-12-06 11:04:40.234702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.234987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.235018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.235049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.235079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.235109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.235142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.235173] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.371 [2024-12-06 11:04:40.235203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246409] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.246996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247344] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.247996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 
11:04:40.248264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.374 [2024-12-06 11:04:40.248941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.248973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 
[2024-12-06 11:04:40.249781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.249981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250254] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.250965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251312] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.375 [2024-12-06 11:04:40.251870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.251906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.251933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.251965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.251996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 
11:04:40.252254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.252976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 
[2024-12-06 11:04:40.253463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253935] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.376 [2024-12-06 11:04:40.253969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.377 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:34.380 [2024-12-06 11:04:40.264488] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.264888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265694] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.265983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 
11:04:40.266613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.266964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.380 [2024-12-06 11:04:40.267636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 
[2024-12-06 11:04:40.267887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.267983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268352] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.268990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269273] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.269649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.381 [2024-12-06 11:04:40.270644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 
11:04:40.270772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.270993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 
[2024-12-06 11:04:40.271608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.271958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.272000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.272030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.382 [2024-12-06 11:04:40.272063] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-12-06 11:04:40.283051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.385 [2024-12-06 11:04:40.283405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283502] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.283971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284468] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.284996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 
11:04:40.285401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.285975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 
[2024-12-06 11:04:40.286601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.386 [2024-12-06 11:04:40.286803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.286834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.286874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.286905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.286935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287106] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.287980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288011] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.288979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 
11:04:40.289369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.387 [2024-12-06 11:04:40.289836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.388 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.388 [2024-12-06 11:04:40.482178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.490817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.490842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.490873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.490901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.490927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.490954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.490980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 
11:04:40.491262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.491967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 
[2024-12-06 11:04:40.492201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:34.391 [2024-12-06 11:04:40.492669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:34.391 [2024-12-06 11:04:40.492949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.492980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493363] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.391 [2024-12-06 11:04:40.493838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.493870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.493909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.493938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.493970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.493999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494192] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.494948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 
11:04:40.495655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.495980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 
[2024-12-06 11:04:40.496499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496927] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.496986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.392 [2024-12-06 11:04:40.497523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated verbatim from 11:04:40.497552 through 11:04:40.507931 (log clock 00:05:34.392-00:05:34.685); only timestamps differ ...]
00:05:34.685 [2024-12-06 11:04:40.507963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:05:34.685 [2024-12-06 11:04:40.507994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508753] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.508991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 
11:04:40.509621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.685 [2024-12-06 11:04:40.509824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.509848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.509876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.509901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.509925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.509949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.509974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 
[2024-12-06 11:04:40.510744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.510995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511178] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.511981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512074] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.686 [2024-12-06 11:04:40.512830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.512865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.512900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.512928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.512959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.512987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 
11:04:40.513260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.513992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 
[2024-12-06 11:04:40.514152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514561] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.514703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:34.687 [2024-12-06 11:04:40.515406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.515435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.515459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.515483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.515507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.515531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:34.687 [2024-12-06 11:04:40.515555] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.687 [2024-12-06 11:04:40.515580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [... identical *ERROR* line repeated for each subsequent read command between 11:04:40.515604 and 11:04:40.525477; duplicate log records omitted ...]
[2024-12-06 11:04:40.525505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525939] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.525996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.691 [2024-12-06 11:04:40.526745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.526773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.526820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.526850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.526884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.526913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.526943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:34.692 [2024-12-06 11:04:40.526985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 
11:04:40.527650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.527970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 
[2024-12-06 11:04:40.528950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.528983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529396] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.692 [2024-12-06 11:04:40.529673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.529989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530271] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.530932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 
11:04:40.531522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.531996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 
[2024-12-06 11:04:40.532444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.693 [2024-12-06 11:04:40.532597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.694 [2024-12-06 11:04:40.532883] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 
[2024-12-06 11:04:40.543762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.543788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.543816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.543845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.543878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.543916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.543947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.543978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.544007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.544034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.544064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.544089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.544121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.697 [2024-12-06 11:04:40.544149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544182] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.544988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545695] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.545975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 
11:04:40.546603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.546994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.698 [2024-12-06 11:04:40.547500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 
[2024-12-06 11:04:40.547530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.547974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548157] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.548972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549104] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.549796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 
11:04:40.550692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.550987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.551015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.699 [2024-12-06 11:04:40.551040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.700 [2024-12-06 11:04:40.551074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.700 [2024-12-06 11:04:40.551105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.700 [2024-12-06 11:04:40.551135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.700 [2024-12-06 11:04:40.551167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.703 [2024-12-06 11:04:40.562102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 
[2024-12-06 11:04:40.562578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.562998] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.563721] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:34.704 [2024-12-06 11:04:40.564827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 
11:04:40.564860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.564986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.565017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.565066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.565097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.565129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.704 [2024-12-06 11:04:40.565158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 
[2024-12-06 11:04:40.565799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.565974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566248] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.566982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567272] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.705 [2024-12-06 11:04:40.567429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.567996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 
11:04:40.568181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.568578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.569186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.569217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.706 [2024-12-06 11:04:40.569248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.579933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.579960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.579995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 
[2024-12-06 11:04:40.580746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.580967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581192] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.581983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.582013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.582042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.582070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.582107] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.710 [2024-12-06 11:04:40.582134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.582678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 
11:04:40.583369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.583998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.584027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.584059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.584088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.584121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.711 [2024-12-06 11:04:40.584153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 
[2024-12-06 11:04:40.584282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584728] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.584975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.585975] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 11:04:40.586835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06 
11:04:40.586870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.712 [2024-12-06
11:04:40.598110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.598997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 
[2024-12-06 11:04:40.599062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599626] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.599993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600495] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.600984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 
11:04:40.601565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.601991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.714 [2024-12-06 11:04:40.602023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:34.715 [2024-12-06 11:04:40.602052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602705] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.602985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603766] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.603983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 11:04:40.604674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.715 [2024-12-06 
11:04:40.604704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
11:04:40.615773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.615804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.615835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.615870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.615900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.615929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.615960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.615990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 
[2024-12-06 11:04:40.616678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.616982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617131] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.617986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618740] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.618992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 
11:04:40.619682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.619998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 
[2024-12-06 11:04:40.620699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.620979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621134] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.717 [2024-12-06 11:04:40.621566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.621965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622057] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.718 [2024-12-06 11:04:40.622856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.633867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.633896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.633936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 
11:04:40.634583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.719 [2024-12-06 11:04:40.634834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.634865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.634894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.634926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.634964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.634992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 
[2024-12-06 11:04:40.635698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.635993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636159] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.636972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637054] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.637968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 
11:04:40.638059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:34.720 [2024-12-06 11:04:40.638341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:34.720 [2024-12-06 11:04:40.638905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.638988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639372] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.639990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640448] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.640908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.720 [2024-12-06 11:04:40.651975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 
11:04:40.652396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.652991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 
[2024-12-06 11:04:40.653336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653773] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.653841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.722 [2024-12-06 11:04:40.654569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.654983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655012] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 
11:04:40.655942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.655973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.656987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 
[2024-12-06 11:04:40.657145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657587] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.657967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658833] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.723 [2024-12-06 11:04:40.658860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.726 (previous message repeated for subsequent timestamps through [2024-12-06 11:04:40.669118])
> SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 true 00:05:34.727 [2024-12-06 
11:04:40.669575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.669984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 
[2024-12-06 11:04:40.670745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.670991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671175] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.671970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.672005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:34.727 [2024-12-06 11:04:40.672033] 
00:05:34.728 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613
00:05:34.728 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.670 11:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.931 11:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:05:35.931 11:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 true
00:05:35.931 11:04:42
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:35.931 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.192 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.454 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:36.454 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:36.454 true 00:05:36.454 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613 00:05:36.454 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.716 Initializing NVMe Controllers 00:05:36.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:36.716 Controller IO queue size 128, less than required. 00:05:36.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:36.716 Controller IO queue size 128, less than required. 00:05:36.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:05:36.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:36.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:36.716 Initialization complete. Launching workers.
00:05:36.716 ========================================================
00:05:36.716 Latency(us)
00:05:36.716 Device Information : IOPS MiB/s Average min max
00:05:36.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2183.68 1.07 26660.53 1515.62 1046499.28
00:05:36.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13351.20 6.52 9555.37 1442.82 404850.65
00:05:36.716 ========================================================
00:05:36.716 Total : 15534.88 7.59 11959.78 1442.82 1046499.28
00:05:36.716
00:05:36.716 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.978 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:05:36.978 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 true
00:05:37.238 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3195613
00:05:37.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3195613) - No such process
00:05:37.238 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3195613
00:05:37.238 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 1 00:05:37.238 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.499 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:37.499 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:37.499 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:37.499 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:37.499 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:37.759 null0 00:05:37.759 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:37.759 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:37.759 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:37.759 null1 00:05:37.759 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:37.759 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:37.759 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:38.020 null2 00:05:38.020 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.020 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.020 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:38.281 null3 00:05:38.281 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.281 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.281 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:38.281 null4 00:05:38.281 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.281 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.281 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:38.541 null5 00:05:38.541 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.541 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.542 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:38.802 null6 00:05:38.802 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.802 11:04:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.802 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:38.802 null7 00:05:39.070 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.070 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.070 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:39.070 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.071 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3202296 3202297 3202299 3202301 3202303 3202305 3202307 3202309 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.072 11:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.072 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.072 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.072 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.072 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.072 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.333 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:39.334 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.595 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.858 11:04:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:39.858 11:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.120 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.382 11:04:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.382 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.643 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.903 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.903 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.903 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.903 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.903 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.903 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.904 11:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.904 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.904 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.165 11:04:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.165 11:04:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.165 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.166 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:05:41.166 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.166 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.166 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.426 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.426 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.426 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.426 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.426 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.426 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.426 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.427 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.707 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.707 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.707 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:05:41.707 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.708 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.968 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.968 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.968 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.968 11:04:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.968 11:04:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.968 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.228 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.229 11:04:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.229 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.492 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:42.754 
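The trace above is the add/remove churn loop from ns_hotplug_stress.sh (lines 16-18 of that script): bounded by `(( i < 10 ))`, each round calls `rpc.py nvmf_subsystem_add_ns` and `nvmf_subsystem_remove_ns` against `nqn.2016-06.io.spdk:cnode1`. A minimal standalone sketch of the same pattern — the RPC invocation is stubbed with `echo` here so the loop runs without a live SPDK target; the NQN and the null0..null7 bdev names are taken from the log, everything else is illustrative:

```shell
# Sketch of the namespace hotplug stress loop seen in the trace.
# RPC is stubbed so this runs standalone; point it at scripts/rpc.py
# against a live target to perform real add/remove operations.
RPC="${RPC:-echo rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"

i=0
adds=0
while (( i < 10 )); do
    nsid=$(( (RANDOM % 8) + 1 ))        # pick one of nsid 1..8
    $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$((nsid - 1))"
    adds=$((adds + 1))
    # remove a (possibly different) namespace to create churn
    $RPC nvmf_subsystem_remove_ns "$NQN" $(( (RANDOM % 8) + 1 ))
    (( ++i ))
done
echo "performed $adds add/remove rounds"
```

In the real script the two RPC calls race against concurrent I/O, which is what makes this a hotplug stress test rather than a functional one.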
11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:42.754 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:42.754 rmmod nvme_tcp 00:05:43.015 rmmod nvme_fabrics 00:05:43.015 rmmod nvme_keyring 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3195083 ']' 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3195083 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3195083 ']' 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3195083 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.015 11:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195083 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:43.015 
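The teardown entries above come from nvmftestfini: common.sh retries `modprobe -v -r nvme-tcp` under `set +e`, then the killprocess helper in autotest_common.sh checks the pid's command name (refusing to kill `sudo`) before sending the signal. A rough standalone sketch of that killprocess pattern, with a background `sleep` standing in for the SPDK reactor process; the function body is a simplification, not the actual helper:

```shell
# Sketch of the killprocess helper seen in the teardown trace:
# refuse to touch a pid whose command name is sudo, then kill it
# and reap it. A background sleep stands in for the nvmf target.
killprocess() {
    local pid=$1
    local name
    name=$(ps --no-headers -o comm= "$pid") || return 0  # already gone
    if [ "$name" = "sudo" ]; then
        echo "refusing to kill sudo (pid $pid)" >&2
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}

sleep 30 &
victim=$!
killprocess "$victim"
```

The comm-name check matters because the target is often launched via sudo: killing the sudo wrapper instead of the reactor would leave the real process orphaned.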
11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195083' 00:05:43.015 killing process with pid 3195083 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3195083 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3195083 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.015 11:04:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:45.567 00:05:45.567 real 0m50.193s 00:05:45.567 user 3m15.859s 00:05:45.567 sys 0m16.804s 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:45.567 ************************************ 00:05:45.567 END TEST nvmf_ns_hotplug_stress 00:05:45.567 ************************************ 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.567 ************************************ 00:05:45.567 START TEST nvmf_delete_subsystem 00:05:45.567 ************************************ 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:45.567 * Looking for test storage... 
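The START/END banners and the `real`/`user`/`sys` lines above are produced by the run_test wrapper in autotest_common.sh. A simplified sketch of that pattern follows; the banner formatting and the bare `time` keyword are inferred from the log output, and the real wrapper additionally manages xtrace state and argument-count checks:

```shell
# Simplified run_test-style wrapper, modeled on the banners and
# timing seen in the log: announce the test, time it, report the end.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo_true true
```

Because `time` is a shell keyword, the timing goes to stderr while the banners go to stdout, which is why the two interleave in the captured log.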
00:05:45.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:45.567 11:04:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
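The scripts/common.sh trace above is the `lt`/`cmp_versions` helper deciding whether the installed lcov (1.15) is older than 2: both version strings are split on `.`, `-` and `:` (the `IFS=.-:` entries) and compared component by component, padding missing components with zero. A standalone sketch of that comparison logic; it is simplified to the less-than case, whereas the real helper also dispatches on `>`, `>=` and `<=`:

```shell
# Component-wise dotted-version "less than", sketched from the
# cmp_versions trace: split on . - : and compare numerically,
# treating absent components as 0.
version_lt() {
    local IFS='.-:'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local len=${#v1[@]}
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal, so not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the log then selects the lcov 1.x option set (`--rc lcov_branch_coverage=1 ...`) rather than the 2.x spellings.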
00:05:45.567 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.568 --rc genhtml_branch_coverage=1 00:05:45.568 --rc genhtml_function_coverage=1 00:05:45.568 --rc genhtml_legend=1 00:05:45.568 --rc geninfo_all_blocks=1 00:05:45.568 --rc geninfo_unexecuted_blocks=1 00:05:45.568 00:05:45.568 ' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.568 --rc genhtml_branch_coverage=1 00:05:45.568 --rc genhtml_function_coverage=1 00:05:45.568 --rc genhtml_legend=1 00:05:45.568 --rc geninfo_all_blocks=1 00:05:45.568 --rc geninfo_unexecuted_blocks=1 00:05:45.568 00:05:45.568 ' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.568 --rc genhtml_branch_coverage=1 00:05:45.568 --rc genhtml_function_coverage=1 00:05:45.568 --rc genhtml_legend=1 00:05:45.568 --rc geninfo_all_blocks=1 00:05:45.568 --rc geninfo_unexecuted_blocks=1 00:05:45.568 00:05:45.568 ' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.568 --rc genhtml_branch_coverage=1 00:05:45.568 --rc genhtml_function_coverage=1 00:05:45.568 --rc genhtml_legend=1 00:05:45.568 --rc geninfo_all_blocks=1 00:05:45.568 --rc geninfo_unexecuted_blocks=1 00:05:45.568 00:05:45.568 ' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.568 11:04:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.568 11:04:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.804 11:04:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:53.804 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:53.804 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:53.804 Found net devices under 0000:31:00.0: cvl_0_0 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:31:00.1: cvl_0_1' 00:05:53.804 Found net devices under 0000:31:00.1: cvl_0_1 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.804 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:53.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:53.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:05:53.805 00:05:53.805 --- 10.0.0.2 ping statistics --- 00:05:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.805 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:53.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:53.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:05:53.805 00:05:53.805 --- 10.0.0.1 ping statistics --- 00:05:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.805 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:53.805 11:04:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3207864 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3207864 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3207864 ']' 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.805 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.805 [2024-12-06 11:04:59.730183] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:05:53.805 [2024-12-06 11:04:59.730248] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.805 [2024-12-06 11:04:59.821486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.805 [2024-12-06 11:04:59.857406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.805 [2024-12-06 11:04:59.857442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.805 [2024-12-06 11:04:59.857450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.805 [2024-12-06 11:04:59.857457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.805 [2024-12-06 11:04:59.857462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:53.805 [2024-12-06 11:04:59.858733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.805 [2024-12-06 11:04:59.858733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.377 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.377 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:54.377 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:54.377 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.377 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.638 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.638 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 [2024-12-06 11:05:00.560761] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 [2024-12-06 11:05:00.576968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 NULL1 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 Delay0 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.639 11:05:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3208190 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:54.639 11:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:54.639 [2024-12-06 11:05:00.661775] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:56.548 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:56.549 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.549 11:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error 
(sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 [2024-12-06 11:05:02.706525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9d2c0 is same with the state(6) to be set 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed 
with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 
00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read 
completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 starting I/O failed: -6 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 [2024-12-06 11:05:02.709696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f969000d350 is same with the state(6) to be set 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, 
sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Write completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.549 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Write completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Write completed with error (sct=0, sc=8) 00:05:56.550 Write completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Write completed with error (sct=0, sc=8) 00:05:56.550 Write completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read 
completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:56.550 Read completed with error (sct=0, sc=8) 00:05:57.937 [2024-12-06 11:05:03.677602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e5f0 is same with the state(6) to be set 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 [2024-12-06 11:05:03.710514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9d0e0 is same with the state(6) to be set 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed 
with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 [2024-12-06 11:05:03.710709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9d4a0 is same with the state(6) to be set 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error 
(sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 [2024-12-06 11:05:03.712218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f969000d020 is same with the state(6) to be set 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Write completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 00:05:57.938 Read completed with error (sct=0, sc=8) 
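Each entry in the stream above is one I/O completion. sct=0 is the generic command status type, and sc=8 (0x08) is, if I read the NVMe base specification's generic status table correctly, "Command Aborted due to SQ Deletion", which is exactly what nvmf_delete_subsystem causes when it tears down the queue pairs under the running perf job. When auditing a run like this, the spam condenses well with grep/uniq; the here-doc below is a small stand-in excerpt for piping in the real log:

```shell
# Condense repeated NVMe completion-error lines into a per-status count.
# The here-doc is a stand-in excerpt of the log above.
grep -o 'completed with error (sct=[0-9]*, sc=[0-9]*)' <<'EOF' | sort | uniq -c
Read completed with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8)
Read completed with error (sct=0, sc=8)
EOF
# → "3 completed with error (sct=0, sc=8)" (with uniq's leading padding)
```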
00:05:57.938 Read completed with error (sct=0, sc=8)
00:05:57.938 [2024-12-06 11:05:03.712555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f969000d680 is same with the state(6) to be set
00:05:57.938 Initializing NVMe Controllers
00:05:57.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:57.938 Controller IO queue size 128, less than required.
00:05:57.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:57.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:57.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:57.938 Initialization complete. Launching workers.
00:05:57.938 ========================================================
00:05:57.938 Latency(us)
00:05:57.938 Device Information : IOPS MiB/s Average min max
00:05:57.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.23 0.08 887953.78 228.46 1007830.83
00:05:57.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.27 0.08 931156.87 303.95 2001561.49
00:05:57.938 ========================================================
00:05:57.938 Total : 335.49 0.16 908978.43 228.46 2001561.49
00:05:57.938
00:05:57.938 [2024-12-06 11:05:03.713136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e5f0 (9): Bad file descriptor
00:05:57.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:05:57.938 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:57.938 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:05:57.938 11:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3208190
00:05:57.938
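The perf summary above is internally consistent: with the 512-byte I/O size the test passes via `-o 512`, MiB/s = IOPS × 512 / 1048576, which reproduces the printed MiB/s column:

```shell
# Recompute the MiB/s column of the perf summary from its IOPS column
# (172.23 and 163.27 IOPS per core, 335.49 total) at a 512 B I/O size.
awk 'BEGIN { printf "%.2f %.2f %.2f\n", 172.23*512/1048576, 163.27*512/1048576, 335.49*512/1048576 }'
# → 0.08 0.08 0.16
```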
11:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3208190 00:05:58.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3208190) - No such process 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3208190 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3208190 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3208190 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
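The `NOT wait 3208190` sequence above (NOT comes from autotest_common.sh) asserts that the wrapped command fails: `wait` must report a nonzero status because perf already exited with errors after the subsystem was deleted. A simplified stand-in for that assertion pattern:

```shell
# Simplified stand-in for autotest_common.sh's NOT helper: succeed only
# if the wrapped command fails.
NOT() { if "$@"; then return 1; else return 0; fi; }

false &            # stand-in for the perf process that died with errors
perf_pid=$!
# `wait` reports the background job's nonzero exit status, so NOT passes.
NOT wait "$perf_pid" && echo "perf failed as expected"
```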
00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:58.200 [2024-12-06 11:05:04.244218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3208876 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876 00:05:58.200 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.200 [2024-12-06 11:05:04.322392] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:58.771 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.771 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876 00:05:58.771 11:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.341 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.341 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876 00:05:59.341 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.910 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.910 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876 00:05:59.910 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
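The repeating @56-@60 lines above are iterations of delete_subsystem.sh's bounded wait: poll the perf PID with `kill -0` every 0.5 s until the process is gone or the delay counter exceeds 20. A self-contained sketch of that loop, with `sleep 1` standing in for the dying perf process:

```shell
# Bounded wait loop in the style of delete_subsystem.sh@56-60: poll with
# `kill -0` every 0.5 s, give up after ~10 s.
sleep 1 &                 # stand-in for the perf process being waited out
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do
    if (( delay++ > 20 )); then
        echo "timed out waiting for pid $perf_pid" >&2
        exit 1
    fi
    sleep 0.5
done
echo "pid $perf_pid exited"
```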
00:06:00.170 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:00.170 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876
00:06:00.170 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:00.742 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:00.742 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876
00:06:00.742 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:01.313 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:01.313 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876
00:06:01.313 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:01.313 Initializing NVMe Controllers
00:06:01.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:01.313 Controller IO queue size 128, less than required.
00:06:01.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:01.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:01.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:01.313 Initialization complete. Launching workers.
00:06:01.313 ========================================================
00:06:01.313 Latency(us)
00:06:01.313 Device Information : IOPS MiB/s Average min max
00:06:01.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002040.88 1000166.84 1006914.04
00:06:01.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002882.89 1000159.88 1009712.14
00:06:01.313 ========================================================
00:06:01.313 Total : 256.00 0.12 1002461.88 1000159.88 1009712.14
00:06:01.313
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3208876
00:06:01.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3208876) - No such process
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3208876
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:06:01.883 rmmod nvme_tcp 00:06:01.883 rmmod nvme_fabrics 00:06:01.883 rmmod nvme_keyring 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3207864 ']' 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3207864 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3207864 ']' 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3207864 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207864 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207864' 00:06:01.883 killing process with pid 3207864 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3207864 00:06:01.883 11:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3207864 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.143 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.055 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:04.055 00:06:04.055 real 0m18.848s 00:06:04.055 user 0m30.458s 00:06:04.055 sys 0m7.272s 00:06:04.055 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.055 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.055 ************************************ 00:06:04.055 END TEST 
nvmf_delete_subsystem 00:06:04.055 ************************************ 00:06:04.055 11:05:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:04.055 11:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:04.055 11:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.055 11:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:04.316 ************************************ 00:06:04.316 START TEST nvmf_host_management 00:06:04.316 ************************************ 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:04.316 * Looking for test storage... 00:06:04.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.316 11:05:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:04.316 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.317 --rc genhtml_branch_coverage=1 00:06:04.317 --rc genhtml_function_coverage=1 00:06:04.317 --rc genhtml_legend=1 00:06:04.317 --rc 
geninfo_all_blocks=1 00:06:04.317 --rc geninfo_unexecuted_blocks=1 00:06:04.317 00:06:04.317 ' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.317 --rc genhtml_branch_coverage=1 00:06:04.317 --rc genhtml_function_coverage=1 00:06:04.317 --rc genhtml_legend=1 00:06:04.317 --rc geninfo_all_blocks=1 00:06:04.317 --rc geninfo_unexecuted_blocks=1 00:06:04.317 00:06:04.317 ' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.317 --rc genhtml_branch_coverage=1 00:06:04.317 --rc genhtml_function_coverage=1 00:06:04.317 --rc genhtml_legend=1 00:06:04.317 --rc geninfo_all_blocks=1 00:06:04.317 --rc geninfo_unexecuted_blocks=1 00:06:04.317 00:06:04.317 ' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.317 --rc genhtml_branch_coverage=1 00:06:04.317 --rc genhtml_function_coverage=1 00:06:04.317 --rc genhtml_legend=1 00:06:04.317 --rc geninfo_all_blocks=1 00:06:04.317 --rc geninfo_unexecuted_blocks=1 00:06:04.317 00:06:04.317 ' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.317 
11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.317 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.318 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:04.578 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:04.578 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:04.578 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:04.578 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:12.713 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:12.714 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:12.714 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:12.714 11:05:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:12.714 Found net devices under 0000:31:00.0: cvl_0_0 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:12.714 Found net devices under 0000:31:00.1: cvl_0_1 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:12.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:06:12.714 00:06:12.714 --- 10.0.0.2 ping statistics --- 00:06:12.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.714 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:12.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:06:12.714 00:06:12.714 --- 10.0.0.1 ping statistics --- 00:06:12.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.714 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.714 11:05:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3214558 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3214558 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3214558 ']' 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.714 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:12.976 [2024-12-06 11:05:18.907403] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:06:12.976 [2024-12-06 11:05:18.907470] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.976 [2024-12-06 11:05:19.020299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.976 [2024-12-06 11:05:19.073019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.976 [2024-12-06 11:05:19.073076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.976 [2024-12-06 11:05:19.073085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.976 [2024-12-06 11:05:19.073092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.976 [2024-12-06 11:05:19.073098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:12.976 [2024-12-06 11:05:19.075493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.976 [2024-12-06 11:05:19.075658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.976 [2024-12-06 11:05:19.075823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.976 [2024-12-06 11:05:19.075823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.916 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.916 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:13.916 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 [2024-12-06 11:05:19.773007] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:13.917 11:05:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 Malloc0 00:06:13.917 [2024-12-06 11:05:19.852238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3214636 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3214636 /var/tmp/bdevperf.sock 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3214636 ']' 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:13.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:13.917 { 00:06:13.917 "params": { 00:06:13.917 "name": "Nvme$subsystem", 00:06:13.917 "trtype": "$TEST_TRANSPORT", 00:06:13.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:13.917 "adrfam": "ipv4", 00:06:13.917 "trsvcid": "$NVMF_PORT", 00:06:13.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:13.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:13.917 "hdgst": ${hdgst:-false}, 
00:06:13.917 "ddgst": ${ddgst:-false} 00:06:13.917 }, 00:06:13.917 "method": "bdev_nvme_attach_controller" 00:06:13.917 } 00:06:13.917 EOF 00:06:13.917 )") 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:13.917 11:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:13.917 "params": { 00:06:13.917 "name": "Nvme0", 00:06:13.917 "trtype": "tcp", 00:06:13.917 "traddr": "10.0.0.2", 00:06:13.917 "adrfam": "ipv4", 00:06:13.917 "trsvcid": "4420", 00:06:13.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:13.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:13.917 "hdgst": false, 00:06:13.917 "ddgst": false 00:06:13.917 }, 00:06:13.917 "method": "bdev_nvme_attach_controller" 00:06:13.917 }' 00:06:13.917 [2024-12-06 11:05:19.957140] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:13.917 [2024-12-06 11:05:19.957207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214636 ] 00:06:13.917 [2024-12-06 11:05:20.044353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.917 [2024-12-06 11:05:20.082010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.178 Running I/O for 10 seconds... 
00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=847 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 847 -ge 100 ']' 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.751 [2024-12-06 11:05:20.831357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is 
same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.831502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491e80 is same with the state(6) to be set 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:14.751 
11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.751 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.751 [2024-12-06 11:05:20.845218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.751 [2024-12-06 11:05:20.845255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.751 [2024-12-06 11:05:20.845265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.751 [2024-12-06 11:05:20.845273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.751 [2024-12-06 11:05:20.845282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.751 [2024-12-06 11:05:20.845289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.751 [2024-12-06 11:05:20.845297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.751 [2024-12-06 11:05:20.845304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.751 [2024-12-06 11:05:20.845312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x588b10 is same with the state(6) to be set 00:06:14.751 [2024-12-06 11:05:20.846503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 
11:05:20.846519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 
[2024-12-06 11:05:20.846916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.846985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.846992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.847001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.847009] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.847019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.847026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.847035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.752 [2024-12-06 11:05:20.847044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.752 [2024-12-06 11:05:20.847054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847106] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:14.753 [2024-12-06 11:05:20.847304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.753 [2024-12-06 11:05:20.847551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.753 [2024-12-06 11:05:20.847558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.754 [2024-12-06 11:05:20.847568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.754 [2024-12-06 11:05:20.847575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.754 [2024-12-06 11:05:20.847585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.754 [2024-12-06 11:05:20.847593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.754 [2024-12-06 11:05:20.847602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.754 [2024-12-06 11:05:20.847610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.754 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.754 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:14.754 [2024-12-06 11:05:20.848832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:14.754 task offset: 122880 on job bdev=Nvme0n1 fails 00:06:14.754 00:06:14.754 Latency(us) 00:06:14.754 [2024-12-06T10:05:20.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.754 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:14.754 Job: Nvme0n1 ended in about 0.62 seconds with error 00:06:14.754 Verification LBA range: start 0x0 length 0x400 00:06:14.754 Nvme0n1 : 0.62 1551.36 96.96 103.42 0.00 37788.02 1556.48 34297.17 00:06:14.754 [2024-12-06T10:05:20.921Z] =================================================================================================================== 00:06:14.754 [2024-12-06T10:05:20.921Z] Total : 1551.36 96.96 103.42 0.00 37788.02 1556.48 34297.17 00:06:14.754 [2024-12-06 11:05:20.850834] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.754 [2024-12-06 11:05:20.850867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x588b10 (9): Bad file 
descriptor 00:06:15.015 [2024-12-06 11:05:20.985109] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3214636 00:06:15.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3214636) - No such process 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:15.959 { 00:06:15.959 "params": { 00:06:15.959 "name": "Nvme$subsystem", 00:06:15.959 "trtype": "$TEST_TRANSPORT", 00:06:15.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:15.959 "adrfam": "ipv4", 00:06:15.959 "trsvcid": "$NVMF_PORT", 00:06:15.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:15.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:15.959 "hdgst": ${hdgst:-false}, 
00:06:15.959 "ddgst": ${ddgst:-false} 00:06:15.959 }, 00:06:15.959 "method": "bdev_nvme_attach_controller" 00:06:15.959 } 00:06:15.959 EOF 00:06:15.959 )") 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:15.959 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:15.959 "params": { 00:06:15.959 "name": "Nvme0", 00:06:15.959 "trtype": "tcp", 00:06:15.959 "traddr": "10.0.0.2", 00:06:15.959 "adrfam": "ipv4", 00:06:15.959 "trsvcid": "4420", 00:06:15.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:15.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:15.959 "hdgst": false, 00:06:15.959 "ddgst": false 00:06:15.959 }, 00:06:15.959 "method": "bdev_nvme_attach_controller" 00:06:15.959 }' 00:06:15.959 [2024-12-06 11:05:21.915121] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:15.959 [2024-12-06 11:05:21.915177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215102 ] 00:06:15.959 [2024-12-06 11:05:21.994019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.959 [2024-12-06 11:05:22.029316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.220 Running I/O for 1 seconds... 
00:06:17.605 1598.00 IOPS, 99.88 MiB/s 00:06:17.605 Latency(us) 00:06:17.605 [2024-12-06T10:05:23.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:17.605 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:17.605 Verification LBA range: start 0x0 length 0x400 00:06:17.605 Nvme0n1 : 1.04 1601.69 100.11 0.00 0.00 39266.68 6034.77 33641.81 00:06:17.605 [2024-12-06T10:05:23.772Z] =================================================================================================================== 00:06:17.606 [2024-12-06T10:05:23.773Z] Total : 1601.69 100.11 0.00 0.00 39266.68 6034.77 33641.81 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.606 11:05:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.606 rmmod nvme_tcp 00:06:17.606 rmmod nvme_fabrics 00:06:17.606 rmmod nvme_keyring 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3214558 ']' 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3214558 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3214558 ']' 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3214558 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3214558 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3214558' 00:06:17.606 killing process with pid 3214558 00:06:17.606 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3214558 00:06:17.606 11:05:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3214558 00:06:17.606 [2024-12-06 11:05:23.760256] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.867 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:19.778 00:06:19.778 real 0m15.630s 00:06:19.778 user 0m23.930s 
00:06:19.778 sys 0m7.404s 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.778 ************************************ 00:06:19.778 END TEST nvmf_host_management 00:06:19.778 ************************************ 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.778 11:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.040 ************************************ 00:06:20.040 START TEST nvmf_lvol 00:06:20.040 ************************************ 00:06:20.040 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:20.040 * Looking for test storage... 
00:06:20.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.040 11:05:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.040 --rc genhtml_branch_coverage=1 00:06:20.040 --rc genhtml_function_coverage=1 00:06:20.040 --rc genhtml_legend=1 00:06:20.040 --rc geninfo_all_blocks=1 00:06:20.040 --rc geninfo_unexecuted_blocks=1 
00:06:20.040 00:06:20.040 ' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.040 --rc genhtml_branch_coverage=1 00:06:20.040 --rc genhtml_function_coverage=1 00:06:20.040 --rc genhtml_legend=1 00:06:20.040 --rc geninfo_all_blocks=1 00:06:20.040 --rc geninfo_unexecuted_blocks=1 00:06:20.040 00:06:20.040 ' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.040 --rc genhtml_branch_coverage=1 00:06:20.040 --rc genhtml_function_coverage=1 00:06:20.040 --rc genhtml_legend=1 00:06:20.040 --rc geninfo_all_blocks=1 00:06:20.040 --rc geninfo_unexecuted_blocks=1 00:06:20.040 00:06:20.040 ' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.040 --rc genhtml_branch_coverage=1 00:06:20.040 --rc genhtml_function_coverage=1 00:06:20.040 --rc genhtml_legend=1 00:06:20.040 --rc geninfo_all_blocks=1 00:06:20.040 --rc geninfo_unexecuted_blocks=1 00:06:20.040 00:06:20.040 ' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.040 11:05:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.040 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.041 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:28.186 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:28.187 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:28.187 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.187 
11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:28.187 Found net devices under 0000:31:00.0: cvl_0_0 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.187 11:05:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:28.187 Found net devices under 0000:31:00.1: cvl_0_1 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.187 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:28.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:06:28.449 00:06:28.449 --- 10.0.0.2 ping statistics --- 00:06:28.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.449 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:06:28.449 00:06:28.449 --- 10.0.0.1 ping statistics --- 00:06:28.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.449 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:28.449 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3220354 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3220354 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3220354 ']' 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.711 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:28.711 [2024-12-06 11:05:34.684712] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:06:28.711 [2024-12-06 11:05:34.684779] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.711 [2024-12-06 11:05:34.775495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.711 [2024-12-06 11:05:34.816771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.711 [2024-12-06 11:05:34.816805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.711 [2024-12-06 11:05:34.816814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.711 [2024-12-06 11:05:34.816821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.711 [2024-12-06 11:05:34.816827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:28.711 [2024-12-06 11:05:34.818250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.711 [2024-12-06 11:05:34.818366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.711 [2024-12-06 11:05:34.818369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:29.653 [2024-12-06 11:05:35.690582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.653 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:29.913 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:29.913 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:30.174 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:30.174 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:30.174 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:30.434 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d49511b8-884a-4ed6-89b0-bb1f93d4e230 00:06:30.434 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d49511b8-884a-4ed6-89b0-bb1f93d4e230 lvol 20 00:06:30.695 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c2c79323-6066-47e6-9b87-742812eefcc3 00:06:30.695 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:30.955 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2c79323-6066-47e6-9b87-742812eefcc3 00:06:30.955 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.215 [2024-12-06 11:05:37.273178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.215 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.475 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3220908 00:06:31.475 11:05:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:31.475 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:32.416 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c2c79323-6066-47e6-9b87-742812eefcc3 MY_SNAPSHOT 00:06:32.690 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=69302ef0-3826-4a8e-95cf-b75d8f60f09d 00:06:32.690 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c2c79323-6066-47e6-9b87-742812eefcc3 30 00:06:32.950 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 69302ef0-3826-4a8e-95cf-b75d8f60f09d MY_CLONE 00:06:33.210 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1616922d-c3ce-48ed-ab5d-1a9474c4f955 00:06:33.211 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1616922d-c3ce-48ed-ab5d-1a9474c4f955 00:06:33.471 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3220908 00:06:43.470 Initializing NVMe Controllers 00:06:43.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:43.470 Controller IO queue size 128, less than required. 00:06:43.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:43.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:43.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:43.470 Initialization complete. Launching workers. 00:06:43.470 ======================================================== 00:06:43.470 Latency(us) 00:06:43.470 Device Information : IOPS MiB/s Average min max 00:06:43.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12298.70 48.04 10407.51 1518.65 39803.63 00:06:43.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17878.30 69.84 7160.33 423.32 54932.62 00:06:43.470 ======================================================== 00:06:43.470 Total : 30177.00 117.88 8483.72 423.32 54932.62 00:06:43.470 00:06:43.470 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2c79323-6066-47e6-9b87-742812eefcc3 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d49511b8-884a-4ed6-89b0-bb1f93d4e230 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:43.470 rmmod nvme_tcp 00:06:43.470 rmmod nvme_fabrics 00:06:43.470 rmmod nvme_keyring 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3220354 ']' 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3220354 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3220354 ']' 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3220354 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220354 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220354' 00:06:43.470 killing process with pid 3220354 00:06:43.470 11:05:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3220354 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3220354 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.470 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.856 00:06:44.856 real 0m24.840s 00:06:44.856 user 1m4.996s 00:06:44.856 sys 0m9.231s 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.856 ************************************ 00:06:44.856 END TEST 
nvmf_lvol 00:06:44.856 ************************************ 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.856 ************************************ 00:06:44.856 START TEST nvmf_lvs_grow 00:06:44.856 ************************************ 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:44.856 * Looking for test storage... 00:06:44.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.856 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.118 11:05:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:45.118 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.119 --rc genhtml_branch_coverage=1 00:06:45.119 --rc genhtml_function_coverage=1 00:06:45.119 --rc genhtml_legend=1 00:06:45.119 --rc geninfo_all_blocks=1 00:06:45.119 --rc geninfo_unexecuted_blocks=1 00:06:45.119 00:06:45.119 ' 
00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.119 --rc genhtml_branch_coverage=1 00:06:45.119 --rc genhtml_function_coverage=1 00:06:45.119 --rc genhtml_legend=1 00:06:45.119 --rc geninfo_all_blocks=1 00:06:45.119 --rc geninfo_unexecuted_blocks=1 00:06:45.119 00:06:45.119 ' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.119 --rc genhtml_branch_coverage=1 00:06:45.119 --rc genhtml_function_coverage=1 00:06:45.119 --rc genhtml_legend=1 00:06:45.119 --rc geninfo_all_blocks=1 00:06:45.119 --rc geninfo_unexecuted_blocks=1 00:06:45.119 00:06:45.119 ' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.119 --rc genhtml_branch_coverage=1 00:06:45.119 --rc genhtml_function_coverage=1 00:06:45.119 --rc genhtml_legend=1 00:06:45.119 --rc geninfo_all_blocks=1 00:06:45.119 --rc geninfo_unexecuted_blocks=1 00:06:45.119 00:06:45.119 ' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.119 11:05:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.119 
11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.119 11:05:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.119 
11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.119 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:53.265 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:53.266 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:53.266 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.266 
11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:53.266 Found net devices under 0000:31:00.0: cvl_0_0 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:53.266 Found net devices under 0000:31:00.1: cvl_0_1 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.266 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:53.526 11:05:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:53.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:06:53.526 00:06:53.526 --- 10.0.0.2 ping statistics --- 00:06:53.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.526 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:06:53.526 00:06:53.526 --- 10.0.0.1 ping statistics --- 00:06:53.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.526 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:53.526 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.527 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3227943 00:06:53.786 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3227943 00:06:53.786 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:53.787 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3227943 ']' 00:06:53.787 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.787 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.787 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.787 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.787 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 [2024-12-06 11:05:59.758860] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:06:53.787 [2024-12-06 11:05:59.758922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.787 [2024-12-06 11:05:59.842073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.787 [2024-12-06 11:05:59.876929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.787 [2024-12-06 11:05:59.876961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.787 [2024-12-06 11:05:59.876969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.787 [2024-12-06 11:05:59.876976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.787 [2024-12-06 11:05:59.876982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:53.787 [2024-12-06 11:05:59.877559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.358 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.358 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:54.358 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:54.358 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.358 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:54.619 [2024-12-06 11:06:00.711078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:54.619 ************************************ 00:06:54.619 START TEST lvs_grow_clean 00:06:54.619 ************************************ 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:54.619 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:54.880 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:54.880 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:54.880 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:55.141 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:06:55.141 11:06:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:06:55.141 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:55.403 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:55.403 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:55.403 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a lvol 150 00:06:55.403 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=42462016-fa8e-4d67-b59d-5a4c3f552438 00:06:55.403 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.403 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:55.673 [2024-12-06 11:06:01.653067] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:55.673 [2024-12-06 11:06:01.653121] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:55.673 true 00:06:55.673 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:06:55.673 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:55.673 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:55.673 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:55.934 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42462016-fa8e-4d67-b59d-5a4c3f552438 00:06:56.194 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:56.194 [2024-12-06 11:06:02.299054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.194 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3228609 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3228609 /var/tmp/bdevperf.sock 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3228609 ']' 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:56.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.455 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:56.455 [2024-12-06 11:06:02.544644] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:06:56.455 [2024-12-06 11:06:02.544696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228609 ] 00:06:56.716 [2024-12-06 11:06:02.640051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.716 [2024-12-06 11:06:02.676011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.352 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.352 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:57.353 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:57.670 Nvme0n1 00:06:57.670 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:57.670 [ 00:06:57.670 { 00:06:57.670 "name": "Nvme0n1", 00:06:57.670 "aliases": [ 00:06:57.670 "42462016-fa8e-4d67-b59d-5a4c3f552438" 00:06:57.670 ], 00:06:57.670 "product_name": "NVMe disk", 00:06:57.670 "block_size": 4096, 00:06:57.670 "num_blocks": 38912, 00:06:57.670 "uuid": "42462016-fa8e-4d67-b59d-5a4c3f552438", 00:06:57.670 "numa_id": 0, 00:06:57.670 "assigned_rate_limits": { 00:06:57.670 "rw_ios_per_sec": 0, 00:06:57.670 "rw_mbytes_per_sec": 0, 00:06:57.670 "r_mbytes_per_sec": 0, 00:06:57.670 "w_mbytes_per_sec": 0 00:06:57.670 }, 00:06:57.670 "claimed": false, 00:06:57.670 "zoned": false, 00:06:57.670 "supported_io_types": { 00:06:57.670 "read": true, 
00:06:57.670 "write": true, 00:06:57.670 "unmap": true, 00:06:57.670 "flush": true, 00:06:57.670 "reset": true, 00:06:57.670 "nvme_admin": true, 00:06:57.670 "nvme_io": true, 00:06:57.670 "nvme_io_md": false, 00:06:57.670 "write_zeroes": true, 00:06:57.670 "zcopy": false, 00:06:57.670 "get_zone_info": false, 00:06:57.670 "zone_management": false, 00:06:57.670 "zone_append": false, 00:06:57.670 "compare": true, 00:06:57.670 "compare_and_write": true, 00:06:57.670 "abort": true, 00:06:57.670 "seek_hole": false, 00:06:57.670 "seek_data": false, 00:06:57.670 "copy": true, 00:06:57.670 "nvme_iov_md": false 00:06:57.670 }, 00:06:57.670 "memory_domains": [ 00:06:57.670 { 00:06:57.670 "dma_device_id": "system", 00:06:57.670 "dma_device_type": 1 00:06:57.670 } 00:06:57.670 ], 00:06:57.670 "driver_specific": { 00:06:57.670 "nvme": [ 00:06:57.671 { 00:06:57.671 "trid": { 00:06:57.671 "trtype": "TCP", 00:06:57.671 "adrfam": "IPv4", 00:06:57.671 "traddr": "10.0.0.2", 00:06:57.671 "trsvcid": "4420", 00:06:57.671 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:57.671 }, 00:06:57.671 "ctrlr_data": { 00:06:57.671 "cntlid": 1, 00:06:57.671 "vendor_id": "0x8086", 00:06:57.671 "model_number": "SPDK bdev Controller", 00:06:57.671 "serial_number": "SPDK0", 00:06:57.671 "firmware_revision": "25.01", 00:06:57.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:57.671 "oacs": { 00:06:57.671 "security": 0, 00:06:57.671 "format": 0, 00:06:57.671 "firmware": 0, 00:06:57.671 "ns_manage": 0 00:06:57.671 }, 00:06:57.671 "multi_ctrlr": true, 00:06:57.671 "ana_reporting": false 00:06:57.671 }, 00:06:57.671 "vs": { 00:06:57.671 "nvme_version": "1.3" 00:06:57.671 }, 00:06:57.671 "ns_data": { 00:06:57.671 "id": 1, 00:06:57.671 "can_share": true 00:06:57.671 } 00:06:57.671 } 00:06:57.671 ], 00:06:57.671 "mp_policy": "active_passive" 00:06:57.671 } 00:06:57.671 } 00:06:57.671 ] 00:06:57.671 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3228947 00:06:57.671 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:57.671 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:57.958 Running I/O for 10 seconds... 00:06:58.897 Latency(us) 00:06:58.897 [2024-12-06T10:06:05.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.898 Nvme0n1 : 1.00 17846.00 69.71 0.00 0.00 0.00 0.00 0.00 00:06:58.898 [2024-12-06T10:06:05.065Z] =================================================================================================================== 00:06:58.898 [2024-12-06T10:06:05.065Z] Total : 17846.00 69.71 0.00 0.00 0.00 0.00 0.00 00:06:58.898 00:06:59.836 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:06:59.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.836 Nvme0n1 : 2.00 18001.00 70.32 0.00 0.00 0.00 0.00 0.00 00:06:59.836 [2024-12-06T10:06:06.003Z] =================================================================================================================== 00:06:59.836 [2024-12-06T10:06:06.003Z] Total : 18001.00 70.32 0.00 0.00 0.00 0.00 0.00 00:06:59.836 00:06:59.836 true 00:06:59.836 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:06:59.836 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:00.096 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:00.096 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:00.096 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3228947 00:07:01.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.036 Nvme0n1 : 3.00 18051.33 70.51 0.00 0.00 0.00 0.00 0.00 00:07:01.036 [2024-12-06T10:06:07.203Z] =================================================================================================================== 00:07:01.036 [2024-12-06T10:06:07.203Z] Total : 18051.33 70.51 0.00 0.00 0.00 0.00 0.00 00:07:01.036 00:07:02.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.028 Nvme0n1 : 4.00 18086.25 70.65 0.00 0.00 0.00 0.00 0.00 00:07:02.028 [2024-12-06T10:06:08.195Z] =================================================================================================================== 00:07:02.028 [2024-12-06T10:06:08.195Z] Total : 18086.25 70.65 0.00 0.00 0.00 0.00 0.00 00:07:02.028 00:07:02.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.968 Nvme0n1 : 5.00 18125.60 70.80 0.00 0.00 0.00 0.00 0.00 00:07:02.968 [2024-12-06T10:06:09.135Z] =================================================================================================================== 00:07:02.968 [2024-12-06T10:06:09.135Z] Total : 18125.60 70.80 0.00 0.00 0.00 0.00 0.00 00:07:02.968 00:07:03.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.908 Nvme0n1 : 6.00 18139.67 70.86 0.00 0.00 0.00 0.00 0.00 00:07:03.908 [2024-12-06T10:06:10.075Z] =================================================================================================================== 00:07:03.908 
[2024-12-06T10:06:10.075Z] Total : 18139.67 70.86 0.00 0.00 0.00 0.00 0.00 00:07:03.908 00:07:04.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.845 Nvme0n1 : 7.00 18165.86 70.96 0.00 0.00 0.00 0.00 0.00 00:07:04.845 [2024-12-06T10:06:11.012Z] =================================================================================================================== 00:07:04.845 [2024-12-06T10:06:11.012Z] Total : 18165.86 70.96 0.00 0.00 0.00 0.00 0.00 00:07:04.845 00:07:05.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.784 Nvme0n1 : 8.00 18178.00 71.01 0.00 0.00 0.00 0.00 0.00 00:07:05.784 [2024-12-06T10:06:11.951Z] =================================================================================================================== 00:07:05.784 [2024-12-06T10:06:11.951Z] Total : 18178.00 71.01 0.00 0.00 0.00 0.00 0.00 00:07:05.784 00:07:06.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.723 Nvme0n1 : 9.00 18196.56 71.08 0.00 0.00 0.00 0.00 0.00 00:07:06.723 [2024-12-06T10:06:12.890Z] =================================================================================================================== 00:07:06.723 [2024-12-06T10:06:12.890Z] Total : 18196.56 71.08 0.00 0.00 0.00 0.00 0.00 00:07:06.723 00:07:08.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.120 Nvme0n1 : 10.00 18201.40 71.10 0.00 0.00 0.00 0.00 0.00 00:07:08.120 [2024-12-06T10:06:14.287Z] =================================================================================================================== 00:07:08.121 [2024-12-06T10:06:14.288Z] Total : 18201.40 71.10 0.00 0.00 0.00 0.00 0.00 00:07:08.121 00:07:08.121 00:07:08.121 Latency(us) 00:07:08.121 [2024-12-06T10:06:14.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:08.121 Nvme0n1 : 10.01 18204.48 71.11 0.00 0.00 7027.61 4232.53 15947.09 00:07:08.121 [2024-12-06T10:06:14.288Z] =================================================================================================================== 00:07:08.121 [2024-12-06T10:06:14.288Z] Total : 18204.48 71.11 0.00 0.00 7027.61 4232.53 15947.09 00:07:08.121 { 00:07:08.121 "results": [ 00:07:08.121 { 00:07:08.121 "job": "Nvme0n1", 00:07:08.121 "core_mask": "0x2", 00:07:08.121 "workload": "randwrite", 00:07:08.121 "status": "finished", 00:07:08.121 "queue_depth": 128, 00:07:08.121 "io_size": 4096, 00:07:08.121 "runtime": 10.00534, 00:07:08.121 "iops": 18204.47880831636, 00:07:08.121 "mibps": 71.11124534498578, 00:07:08.121 "io_failed": 0, 00:07:08.121 "io_timeout": 0, 00:07:08.121 "avg_latency_us": 7027.609683580211, 00:07:08.121 "min_latency_us": 4232.533333333334, 00:07:08.121 "max_latency_us": 15947.093333333334 00:07:08.121 } 00:07:08.121 ], 00:07:08.121 "core_count": 1 00:07:08.121 } 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3228609 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3228609 ']' 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3228609 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228609 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.121 11:06:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3228609' 00:07:08.121 killing process with pid 3228609 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3228609 00:07:08.121 Received shutdown signal, test time was about 10.000000 seconds 00:07:08.121 00:07:08.121 Latency(us) 00:07:08.121 [2024-12-06T10:06:14.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.121 [2024-12-06T10:06:14.288Z] =================================================================================================================== 00:07:08.121 [2024-12-06T10:06:14.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:08.121 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3228609 00:07:08.121 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.121 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:08.380 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:07:08.380 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:08.640 [2024-12-06 11:06:14.728394] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.640 
11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:08.640 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:07:08.900 request: 00:07:08.900 { 00:07:08.900 "uuid": "0b2539bd-fa6e-4c25-a922-3a5ff94aff4a", 00:07:08.900 "method": "bdev_lvol_get_lvstores", 00:07:08.900 "req_id": 1 00:07:08.900 } 00:07:08.900 Got JSON-RPC error response 00:07:08.900 response: 00:07:08.900 { 00:07:08.900 "code": -19, 00:07:08.900 "message": "No such device" 00:07:08.900 } 00:07:08.900 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:08.900 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.900 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.900 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.900 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:09.159 aio_bdev 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 42462016-fa8e-4d67-b59d-5a4c3f552438 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=42462016-fa8e-4d67-b59d-5a4c3f552438 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:09.159 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 42462016-fa8e-4d67-b59d-5a4c3f552438 -t 2000 00:07:09.419 [ 00:07:09.419 { 00:07:09.419 "name": "42462016-fa8e-4d67-b59d-5a4c3f552438", 00:07:09.419 "aliases": [ 00:07:09.419 "lvs/lvol" 00:07:09.419 ], 00:07:09.419 "product_name": "Logical Volume", 00:07:09.419 "block_size": 4096, 00:07:09.419 "num_blocks": 38912, 00:07:09.419 "uuid": "42462016-fa8e-4d67-b59d-5a4c3f552438", 00:07:09.419 "assigned_rate_limits": { 00:07:09.419 "rw_ios_per_sec": 0, 00:07:09.419 "rw_mbytes_per_sec": 0, 00:07:09.419 "r_mbytes_per_sec": 0, 00:07:09.419 "w_mbytes_per_sec": 0 00:07:09.419 }, 00:07:09.419 "claimed": false, 00:07:09.419 "zoned": false, 00:07:09.419 "supported_io_types": { 00:07:09.419 "read": true, 00:07:09.419 "write": true, 00:07:09.419 "unmap": true, 00:07:09.419 "flush": false, 00:07:09.419 "reset": true, 00:07:09.419 
"nvme_admin": false, 00:07:09.419 "nvme_io": false, 00:07:09.419 "nvme_io_md": false, 00:07:09.419 "write_zeroes": true, 00:07:09.419 "zcopy": false, 00:07:09.419 "get_zone_info": false, 00:07:09.419 "zone_management": false, 00:07:09.419 "zone_append": false, 00:07:09.419 "compare": false, 00:07:09.419 "compare_and_write": false, 00:07:09.419 "abort": false, 00:07:09.419 "seek_hole": true, 00:07:09.419 "seek_data": true, 00:07:09.419 "copy": false, 00:07:09.419 "nvme_iov_md": false 00:07:09.419 }, 00:07:09.419 "driver_specific": { 00:07:09.419 "lvol": { 00:07:09.419 "lvol_store_uuid": "0b2539bd-fa6e-4c25-a922-3a5ff94aff4a", 00:07:09.419 "base_bdev": "aio_bdev", 00:07:09.419 "thin_provision": false, 00:07:09.419 "num_allocated_clusters": 38, 00:07:09.419 "snapshot": false, 00:07:09.419 "clone": false, 00:07:09.419 "esnap_clone": false 00:07:09.419 } 00:07:09.419 } 00:07:09.419 } 00:07:09.419 ] 00:07:09.419 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:09.419 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:07:09.419 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:09.678 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:09.678 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:07:09.678 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:09.678 11:06:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:09.678 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 42462016-fa8e-4d67-b59d-5a4c3f552438 00:07:09.938 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b2539bd-fa6e-4c25-a922-3a5ff94aff4a 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.198 00:07:10.198 real 0m15.536s 00:07:10.198 user 0m15.298s 00:07:10.198 sys 0m1.322s 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:10.198 ************************************ 00:07:10.198 END TEST lvs_grow_clean 00:07:10.198 ************************************ 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.198 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.458 ************************************ 
00:07:10.458 START TEST lvs_grow_dirty 00:07:10.458 ************************************ 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.458 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.717 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:10.717 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:10.717 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:10.717 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:10.717 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:10.976 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:10.976 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:10.976 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 lvol 150 00:07:11.235 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=115c829a-1e21-42b2-a502-9ca3f5409bef 00:07:11.235 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.235 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:11.235 [2024-12-06 11:06:17.307091] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:11.235 [2024-12-06 11:06:17.307144] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:11.235 true 00:07:11.235 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:11.235 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:11.494 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:11.494 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.753 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 115c829a-1e21-42b2-a502-9ca3f5409bef 00:07:11.753 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.013 [2024-12-06 11:06:17.973285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.013 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3232174 00:07:12.013 11:06:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3232174 /var/tmp/bdevperf.sock 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3232174 ']' 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.013 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.274 [2024-12-06 11:06:18.207981] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:07:12.274 [2024-12-06 11:06:18.208037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232174 ] 00:07:12.274 [2024-12-06 11:06:18.298948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.274 [2024-12-06 11:06:18.329228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.213 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.213 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:13.213 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:13.213 Nvme0n1 00:07:13.213 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:13.473 [ 00:07:13.473 { 00:07:13.473 "name": "Nvme0n1", 00:07:13.473 "aliases": [ 00:07:13.473 "115c829a-1e21-42b2-a502-9ca3f5409bef" 00:07:13.473 ], 00:07:13.473 "product_name": "NVMe disk", 00:07:13.473 "block_size": 4096, 00:07:13.473 "num_blocks": 38912, 00:07:13.473 "uuid": "115c829a-1e21-42b2-a502-9ca3f5409bef", 00:07:13.473 "numa_id": 0, 00:07:13.473 "assigned_rate_limits": { 00:07:13.473 "rw_ios_per_sec": 0, 00:07:13.473 "rw_mbytes_per_sec": 0, 00:07:13.473 "r_mbytes_per_sec": 0, 00:07:13.473 "w_mbytes_per_sec": 0 00:07:13.473 }, 00:07:13.473 "claimed": false, 00:07:13.473 "zoned": false, 00:07:13.473 "supported_io_types": { 00:07:13.473 "read": true, 
00:07:13.473 "write": true, 00:07:13.473 "unmap": true, 00:07:13.473 "flush": true, 00:07:13.473 "reset": true, 00:07:13.473 "nvme_admin": true, 00:07:13.473 "nvme_io": true, 00:07:13.473 "nvme_io_md": false, 00:07:13.473 "write_zeroes": true, 00:07:13.473 "zcopy": false, 00:07:13.473 "get_zone_info": false, 00:07:13.473 "zone_management": false, 00:07:13.473 "zone_append": false, 00:07:13.473 "compare": true, 00:07:13.473 "compare_and_write": true, 00:07:13.473 "abort": true, 00:07:13.473 "seek_hole": false, 00:07:13.473 "seek_data": false, 00:07:13.473 "copy": true, 00:07:13.473 "nvme_iov_md": false 00:07:13.473 }, 00:07:13.473 "memory_domains": [ 00:07:13.473 { 00:07:13.473 "dma_device_id": "system", 00:07:13.473 "dma_device_type": 1 00:07:13.473 } 00:07:13.473 ], 00:07:13.473 "driver_specific": { 00:07:13.473 "nvme": [ 00:07:13.473 { 00:07:13.473 "trid": { 00:07:13.473 "trtype": "TCP", 00:07:13.473 "adrfam": "IPv4", 00:07:13.473 "traddr": "10.0.0.2", 00:07:13.473 "trsvcid": "4420", 00:07:13.473 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:13.473 }, 00:07:13.473 "ctrlr_data": { 00:07:13.473 "cntlid": 1, 00:07:13.473 "vendor_id": "0x8086", 00:07:13.473 "model_number": "SPDK bdev Controller", 00:07:13.473 "serial_number": "SPDK0", 00:07:13.473 "firmware_revision": "25.01", 00:07:13.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.473 "oacs": { 00:07:13.473 "security": 0, 00:07:13.473 "format": 0, 00:07:13.473 "firmware": 0, 00:07:13.473 "ns_manage": 0 00:07:13.473 }, 00:07:13.473 "multi_ctrlr": true, 00:07:13.473 "ana_reporting": false 00:07:13.473 }, 00:07:13.473 "vs": { 00:07:13.473 "nvme_version": "1.3" 00:07:13.473 }, 00:07:13.473 "ns_data": { 00:07:13.473 "id": 1, 00:07:13.473 "can_share": true 00:07:13.473 } 00:07:13.473 } 00:07:13.473 ], 00:07:13.473 "mp_policy": "active_passive" 00:07:13.473 } 00:07:13.473 } 00:07:13.473 ] 00:07:13.473 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3232499 00:07:13.473 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:13.473 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:13.473 Running I/O for 10 seconds... 00:07:14.413 Latency(us) 00:07:14.413 [2024-12-06T10:06:20.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.413 Nvme0n1 : 1.00 17860.00 69.77 0.00 0.00 0.00 0.00 0.00 00:07:14.413 [2024-12-06T10:06:20.580Z] =================================================================================================================== 00:07:14.413 [2024-12-06T10:06:20.580Z] Total : 17860.00 69.77 0.00 0.00 0.00 0.00 0.00 00:07:14.413 00:07:15.353 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:15.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.613 Nvme0n1 : 2.00 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:07:15.613 [2024-12-06T10:06:21.780Z] =================================================================================================================== 00:07:15.613 [2024-12-06T10:06:21.780Z] Total : 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:07:15.613 00:07:15.613 true 00:07:15.613 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:15.613 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:15.872 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:15.872 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:15.872 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3232499 00:07:16.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.442 Nvme0n1 : 3.00 18060.67 70.55 0.00 0.00 0.00 0.00 0.00 00:07:16.442 [2024-12-06T10:06:22.609Z] =================================================================================================================== 00:07:16.442 [2024-12-06T10:06:22.609Z] Total : 18060.67 70.55 0.00 0.00 0.00 0.00 0.00 00:07:16.442 00:07:17.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.382 Nvme0n1 : 4.00 18116.25 70.77 0.00 0.00 0.00 0.00 0.00 00:07:17.382 [2024-12-06T10:06:23.549Z] =================================================================================================================== 00:07:17.382 [2024-12-06T10:06:23.549Z] Total : 18116.25 70.77 0.00 0.00 0.00 0.00 0.00 00:07:17.382 00:07:18.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.764 Nvme0n1 : 5.00 18145.40 70.88 0.00 0.00 0.00 0.00 0.00 00:07:18.764 [2024-12-06T10:06:24.931Z] =================================================================================================================== 00:07:18.764 [2024-12-06T10:06:24.931Z] Total : 18145.40 70.88 0.00 0.00 0.00 0.00 0.00 00:07:18.764 00:07:19.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.702 Nvme0n1 : 6.00 18173.50 70.99 0.00 0.00 0.00 0.00 0.00 00:07:19.702 [2024-12-06T10:06:25.869Z] =================================================================================================================== 00:07:19.702 
[2024-12-06T10:06:25.869Z] Total : 18173.50 70.99 0.00 0.00 0.00 0.00 0.00 00:07:19.702 00:07:20.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.640 Nvme0n1 : 7.00 18206.57 71.12 0.00 0.00 0.00 0.00 0.00 00:07:20.640 [2024-12-06T10:06:26.807Z] =================================================================================================================== 00:07:20.640 [2024-12-06T10:06:26.807Z] Total : 18206.57 71.12 0.00 0.00 0.00 0.00 0.00 00:07:20.640 00:07:21.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.578 Nvme0n1 : 8.00 18209.38 71.13 0.00 0.00 0.00 0.00 0.00 00:07:21.578 [2024-12-06T10:06:27.745Z] =================================================================================================================== 00:07:21.578 [2024-12-06T10:06:27.745Z] Total : 18209.38 71.13 0.00 0.00 0.00 0.00 0.00 00:07:21.578 00:07:22.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.515 Nvme0n1 : 9.00 18226.56 71.20 0.00 0.00 0.00 0.00 0.00 00:07:22.515 [2024-12-06T10:06:28.682Z] =================================================================================================================== 00:07:22.515 [2024-12-06T10:06:28.682Z] Total : 18226.56 71.20 0.00 0.00 0.00 0.00 0.00 00:07:22.515 00:07:23.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.454 Nvme0n1 : 10.00 18241.50 71.26 0.00 0.00 0.00 0.00 0.00 00:07:23.454 [2024-12-06T10:06:29.621Z] =================================================================================================================== 00:07:23.454 [2024-12-06T10:06:29.621Z] Total : 18241.50 71.26 0.00 0.00 0.00 0.00 0.00 00:07:23.455 00:07:23.455 00:07:23.455 Latency(us) 00:07:23.455 [2024-12-06T10:06:29.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:23.455 Nvme0n1 : 10.00 18238.97 71.25 0.00 0.00 7014.61 4259.84 13544.11 00:07:23.455 [2024-12-06T10:06:29.622Z] =================================================================================================================== 00:07:23.455 [2024-12-06T10:06:29.622Z] Total : 18238.97 71.25 0.00 0.00 7014.61 4259.84 13544.11 00:07:23.455 { 00:07:23.455 "results": [ 00:07:23.455 { 00:07:23.455 "job": "Nvme0n1", 00:07:23.455 "core_mask": "0x2", 00:07:23.455 "workload": "randwrite", 00:07:23.455 "status": "finished", 00:07:23.455 "queue_depth": 128, 00:07:23.455 "io_size": 4096, 00:07:23.455 "runtime": 10.004949, 00:07:23.455 "iops": 18238.973531999014, 00:07:23.455 "mibps": 71.24599035937115, 00:07:23.455 "io_failed": 0, 00:07:23.455 "io_timeout": 0, 00:07:23.455 "avg_latency_us": 7014.613015344148, 00:07:23.455 "min_latency_us": 4259.84, 00:07:23.455 "max_latency_us": 13544.106666666667 00:07:23.455 } 00:07:23.455 ], 00:07:23.455 "core_count": 1 00:07:23.455 } 00:07:23.455 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3232174 00:07:23.455 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3232174 ']' 00:07:23.455 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3232174 00:07:23.455 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:23.455 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.455 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3232174 00:07:23.715 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:23.715 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:23.715 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3232174' 00:07:23.715 killing process with pid 3232174 00:07:23.715 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3232174 00:07:23.715 Received shutdown signal, test time was about 10.000000 seconds 00:07:23.715 00:07:23.715 Latency(us) 00:07:23.715 [2024-12-06T10:06:29.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.715 [2024-12-06T10:06:29.882Z] =================================================================================================================== 00:07:23.715 [2024-12-06T10:06:29.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:23.715 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3232174 00:07:23.715 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.974 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:23.974 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:23.974 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:24.235 11:06:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3227943 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3227943 00:07:24.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3227943 Killed "${NVMF_APP[@]}" "$@" 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3234659 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3234659 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3234659 ']' 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.235 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:24.235 [2024-12-06 11:06:30.381738] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:24.235 [2024-12-06 11:06:30.381795] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.495 [2024-12-06 11:06:30.466950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.496 [2024-12-06 11:06:30.503247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.496 [2024-12-06 11:06:30.503276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.496 [2024-12-06 11:06:30.503284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.496 [2024-12-06 11:06:30.503291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.496 [2024-12-06 11:06:30.503297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:24.496 [2024-12-06 11:06:30.503870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.065 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.065 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:25.065 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.065 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.065 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.065 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.065 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.325 [2024-12-06 11:06:31.363590] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:25.325 [2024-12-06 11:06:31.363691] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:25.325 [2024-12-06 11:06:31.363721] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 115c829a-1e21-42b2-a502-9ca3f5409bef 00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=115c829a-1e21-42b2-a502-9ca3f5409bef 
00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.325 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:25.585 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 115c829a-1e21-42b2-a502-9ca3f5409bef -t 2000 00:07:25.585 [ 00:07:25.585 { 00:07:25.585 "name": "115c829a-1e21-42b2-a502-9ca3f5409bef", 00:07:25.585 "aliases": [ 00:07:25.585 "lvs/lvol" 00:07:25.585 ], 00:07:25.585 "product_name": "Logical Volume", 00:07:25.585 "block_size": 4096, 00:07:25.585 "num_blocks": 38912, 00:07:25.585 "uuid": "115c829a-1e21-42b2-a502-9ca3f5409bef", 00:07:25.585 "assigned_rate_limits": { 00:07:25.585 "rw_ios_per_sec": 0, 00:07:25.585 "rw_mbytes_per_sec": 0, 00:07:25.585 "r_mbytes_per_sec": 0, 00:07:25.585 "w_mbytes_per_sec": 0 00:07:25.585 }, 00:07:25.585 "claimed": false, 00:07:25.585 "zoned": false, 00:07:25.585 "supported_io_types": { 00:07:25.585 "read": true, 00:07:25.585 "write": true, 00:07:25.585 "unmap": true, 00:07:25.585 "flush": false, 00:07:25.585 "reset": true, 00:07:25.585 "nvme_admin": false, 00:07:25.585 "nvme_io": false, 00:07:25.585 "nvme_io_md": false, 00:07:25.585 "write_zeroes": true, 00:07:25.585 "zcopy": false, 00:07:25.585 "get_zone_info": false, 00:07:25.585 "zone_management": false, 00:07:25.585 "zone_append": 
false, 00:07:25.585 "compare": false, 00:07:25.585 "compare_and_write": false, 00:07:25.585 "abort": false, 00:07:25.585 "seek_hole": true, 00:07:25.585 "seek_data": true, 00:07:25.585 "copy": false, 00:07:25.585 "nvme_iov_md": false 00:07:25.585 }, 00:07:25.585 "driver_specific": { 00:07:25.585 "lvol": { 00:07:25.585 "lvol_store_uuid": "4ad5ff29-7a46-4499-ab63-f47745e32ae9", 00:07:25.585 "base_bdev": "aio_bdev", 00:07:25.585 "thin_provision": false, 00:07:25.585 "num_allocated_clusters": 38, 00:07:25.585 "snapshot": false, 00:07:25.585 "clone": false, 00:07:25.585 "esnap_clone": false 00:07:25.585 } 00:07:25.585 } 00:07:25.585 } 00:07:25.585 ] 00:07:25.585 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:25.585 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:25.585 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:25.845 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:25.845 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:25.845 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:26.106 [2024-12-06 11:06:32.195715] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.106 11:06:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:26.106 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:26.367 request: 00:07:26.367 { 00:07:26.367 "uuid": "4ad5ff29-7a46-4499-ab63-f47745e32ae9", 00:07:26.367 "method": "bdev_lvol_get_lvstores", 00:07:26.367 "req_id": 1 00:07:26.367 } 00:07:26.367 Got JSON-RPC error response 00:07:26.367 response: 00:07:26.367 { 00:07:26.367 "code": -19, 00:07:26.367 "message": "No such device" 00:07:26.367 } 00:07:26.367 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:26.367 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.367 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.367 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.367 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.629 aio_bdev 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 115c829a-1e21-42b2-a502-9ca3f5409bef 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=115c829a-1e21-42b2-a502-9ca3f5409bef 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.629 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 115c829a-1e21-42b2-a502-9ca3f5409bef -t 2000 00:07:26.890 [ 00:07:26.890 { 00:07:26.890 "name": "115c829a-1e21-42b2-a502-9ca3f5409bef", 00:07:26.890 "aliases": [ 00:07:26.890 "lvs/lvol" 00:07:26.890 ], 00:07:26.890 "product_name": "Logical Volume", 00:07:26.890 "block_size": 4096, 00:07:26.890 "num_blocks": 38912, 00:07:26.890 "uuid": "115c829a-1e21-42b2-a502-9ca3f5409bef", 00:07:26.890 "assigned_rate_limits": { 00:07:26.890 "rw_ios_per_sec": 0, 00:07:26.890 "rw_mbytes_per_sec": 0, 00:07:26.890 "r_mbytes_per_sec": 0, 00:07:26.890 "w_mbytes_per_sec": 0 00:07:26.890 }, 00:07:26.890 "claimed": false, 00:07:26.890 "zoned": false, 00:07:26.890 "supported_io_types": { 00:07:26.890 "read": true, 00:07:26.890 "write": true, 00:07:26.890 "unmap": true, 00:07:26.890 "flush": false, 00:07:26.890 "reset": true, 00:07:26.890 "nvme_admin": false, 00:07:26.890 "nvme_io": false, 00:07:26.890 "nvme_io_md": false, 00:07:26.890 "write_zeroes": true, 00:07:26.890 "zcopy": false, 00:07:26.890 "get_zone_info": false, 00:07:26.890 "zone_management": false, 00:07:26.890 "zone_append": false, 00:07:26.890 "compare": false, 00:07:26.890 "compare_and_write": false, 
00:07:26.890 "abort": false, 00:07:26.890 "seek_hole": true, 00:07:26.890 "seek_data": true, 00:07:26.890 "copy": false, 00:07:26.890 "nvme_iov_md": false 00:07:26.890 }, 00:07:26.890 "driver_specific": { 00:07:26.890 "lvol": { 00:07:26.890 "lvol_store_uuid": "4ad5ff29-7a46-4499-ab63-f47745e32ae9", 00:07:26.890 "base_bdev": "aio_bdev", 00:07:26.890 "thin_provision": false, 00:07:26.890 "num_allocated_clusters": 38, 00:07:26.890 "snapshot": false, 00:07:26.890 "clone": false, 00:07:26.890 "esnap_clone": false 00:07:26.890 } 00:07:26.890 } 00:07:26.890 } 00:07:26.890 ] 00:07:26.890 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:26.890 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:26.890 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:27.151 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:27.151 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:27.151 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:27.151 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:27.151 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 115c829a-1e21-42b2-a502-9ca3f5409bef 00:07:27.411 11:06:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ad5ff29-7a46-4499-ab63-f47745e32ae9 00:07:27.671 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.671 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.671 00:07:27.671 real 0m17.426s 00:07:27.671 user 0m45.326s 00:07:27.671 sys 0m2.957s 00:07:27.671 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.671 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:27.671 ************************************ 00:07:27.671 END TEST lvs_grow_dirty 00:07:27.671 ************************************ 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:27.931 nvmf_trace.0 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.931 11:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.931 rmmod nvme_tcp 00:07:27.931 rmmod nvme_fabrics 00:07:27.931 rmmod nvme_keyring 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3234659 ']' 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3234659 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3234659 ']' 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3234659 
00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3234659 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3234659' 00:07:27.931 killing process with pid 3234659 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3234659 00:07:27.931 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3234659 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.192 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.124 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.124 00:07:30.124 real 0m45.410s 00:07:30.124 user 1m7.217s 00:07:30.124 sys 0m11.215s 00:07:30.124 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.124 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.124 ************************************ 00:07:30.124 END TEST nvmf_lvs_grow 00:07:30.124 ************************************ 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.385 ************************************ 00:07:30.385 START TEST nvmf_bdev_io_wait 00:07:30.385 ************************************ 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:30.385 * Looking for test storage... 
00:07:30.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:30.385 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.648 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.648 --rc genhtml_branch_coverage=1 00:07:30.648 --rc genhtml_function_coverage=1 00:07:30.648 --rc genhtml_legend=1 00:07:30.648 --rc geninfo_all_blocks=1 00:07:30.648 --rc geninfo_unexecuted_blocks=1 00:07:30.648 00:07:30.648 ' 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.648 --rc genhtml_branch_coverage=1 00:07:30.648 --rc genhtml_function_coverage=1 00:07:30.648 --rc genhtml_legend=1 00:07:30.648 --rc geninfo_all_blocks=1 00:07:30.648 --rc geninfo_unexecuted_blocks=1 00:07:30.648 00:07:30.648 ' 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.648 --rc genhtml_branch_coverage=1 00:07:30.648 --rc genhtml_function_coverage=1 00:07:30.648 --rc genhtml_legend=1 00:07:30.648 --rc geninfo_all_blocks=1 00:07:30.648 --rc geninfo_unexecuted_blocks=1 00:07:30.648 00:07:30.648 ' 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.648 --rc genhtml_branch_coverage=1 00:07:30.648 --rc genhtml_function_coverage=1 00:07:30.648 --rc genhtml_legend=1 00:07:30.648 --rc geninfo_all_blocks=1 00:07:30.648 --rc geninfo_unexecuted_blocks=1 00:07:30.648 00:07:30.648 ' 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.648 11:06:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.648 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.649 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:38.797 11:06:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:38.797 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:38.797 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.797 11:06:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:38.797 Found net devices under 0000:31:00.0: cvl_0_0 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.797 
11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.797 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:38.798 Found net devices under 0000:31:00.1: cvl_0_1 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.798 11:06:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:38.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:07:38.798 00:07:38.798 --- 10.0.0.2 ping statistics --- 00:07:38.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.798 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:07:38.798 00:07:38.798 --- 10.0.0.1 ping statistics --- 00:07:38.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.798 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3240289 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3240289 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3240289 ']' 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.798 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.059 [2024-12-06 11:06:44.996822] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:39.059 [2024-12-06 11:06:44.996879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.059 [2024-12-06 11:06:45.084170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.059 [2024-12-06 11:06:45.121317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.059 [2024-12-06 11:06:45.121353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:39.059 [2024-12-06 11:06:45.121361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.059 [2024-12-06 11:06:45.121368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.059 [2024-12-06 11:06:45.121374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.059 [2024-12-06 11:06:45.124479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.059 [2024-12-06 11:06:45.124570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.060 [2024-12-06 11:06:45.124778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.060 [2024-12-06 11:06:45.124779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.060 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.060 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:39.060 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.060 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.060 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 11:06:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 [2024-12-06 11:06:45.303868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 Malloc0 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.321 
11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.321 [2024-12-06 11:06:45.363109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3240319 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3240321 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.321 { 00:07:39.321 "params": { 00:07:39.321 "name": "Nvme$subsystem", 00:07:39.321 "trtype": "$TEST_TRANSPORT", 00:07:39.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.321 "adrfam": "ipv4", 00:07:39.321 "trsvcid": "$NVMF_PORT", 00:07:39.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.321 "hdgst": ${hdgst:-false}, 00:07:39.321 "ddgst": ${ddgst:-false} 00:07:39.321 }, 00:07:39.321 "method": "bdev_nvme_attach_controller" 00:07:39.321 } 00:07:39.321 EOF 00:07:39.321 )") 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3240323 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.321 { 00:07:39.321 "params": { 00:07:39.321 
"name": "Nvme$subsystem", 00:07:39.321 "trtype": "$TEST_TRANSPORT", 00:07:39.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.321 "adrfam": "ipv4", 00:07:39.321 "trsvcid": "$NVMF_PORT", 00:07:39.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.321 "hdgst": ${hdgst:-false}, 00:07:39.321 "ddgst": ${ddgst:-false} 00:07:39.321 }, 00:07:39.321 "method": "bdev_nvme_attach_controller" 00:07:39.321 } 00:07:39.321 EOF 00:07:39.321 )") 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3240326 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.321 { 00:07:39.321 "params": { 00:07:39.321 "name": "Nvme$subsystem", 00:07:39.321 "trtype": "$TEST_TRANSPORT", 00:07:39.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.321 "adrfam": "ipv4", 00:07:39.321 "trsvcid": "$NVMF_PORT", 00:07:39.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.321 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:39.321 "hdgst": ${hdgst:-false}, 00:07:39.321 "ddgst": ${ddgst:-false} 00:07:39.321 }, 00:07:39.321 "method": "bdev_nvme_attach_controller" 00:07:39.321 } 00:07:39.321 EOF 00:07:39.321 )") 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.321 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.321 { 00:07:39.321 "params": { 00:07:39.321 "name": "Nvme$subsystem", 00:07:39.321 "trtype": "$TEST_TRANSPORT", 00:07:39.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.322 "adrfam": "ipv4", 00:07:39.322 "trsvcid": "$NVMF_PORT", 00:07:39.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.322 "hdgst": ${hdgst:-false}, 00:07:39.322 "ddgst": ${ddgst:-false} 00:07:39.322 }, 00:07:39.322 "method": "bdev_nvme_attach_controller" 00:07:39.322 } 00:07:39.322 EOF 00:07:39.322 )") 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3240319 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.322 "params": { 00:07:39.322 "name": "Nvme1", 00:07:39.322 "trtype": "tcp", 00:07:39.322 "traddr": "10.0.0.2", 00:07:39.322 "adrfam": "ipv4", 00:07:39.322 "trsvcid": "4420", 00:07:39.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.322 "hdgst": false, 00:07:39.322 "ddgst": false 00:07:39.322 }, 00:07:39.322 "method": "bdev_nvme_attach_controller" 00:07:39.322 }' 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.322 "params": { 00:07:39.322 "name": "Nvme1", 00:07:39.322 "trtype": "tcp", 00:07:39.322 "traddr": "10.0.0.2", 00:07:39.322 "adrfam": "ipv4", 00:07:39.322 "trsvcid": "4420", 00:07:39.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.322 "hdgst": false, 00:07:39.322 "ddgst": false 00:07:39.322 }, 00:07:39.322 "method": "bdev_nvme_attach_controller" 00:07:39.322 }' 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.322 "params": { 00:07:39.322 "name": "Nvme1", 00:07:39.322 "trtype": "tcp", 00:07:39.322 "traddr": "10.0.0.2", 00:07:39.322 "adrfam": "ipv4", 00:07:39.322 "trsvcid": "4420", 00:07:39.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.322 "hdgst": false, 00:07:39.322 "ddgst": false 00:07:39.322 }, 00:07:39.322 "method": "bdev_nvme_attach_controller" 00:07:39.322 }' 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.322 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.322 "params": { 00:07:39.322 "name": "Nvme1", 00:07:39.322 "trtype": "tcp", 00:07:39.322 "traddr": "10.0.0.2", 00:07:39.322 "adrfam": "ipv4", 00:07:39.322 "trsvcid": "4420", 00:07:39.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.322 "hdgst": false, 00:07:39.322 "ddgst": false 00:07:39.322 }, 00:07:39.322 "method": "bdev_nvme_attach_controller" 00:07:39.322 }' 00:07:39.322 [2024-12-06 11:06:45.419850] Starting SPDK v25.01-pre git sha1 
500d76084 / DPDK 24.03.0 initialization... 00:07:39.322 [2024-12-06 11:06:45.419905] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:39.322 [2024-12-06 11:06:45.420694] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:39.322 [2024-12-06 11:06:45.420741] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:39.322 [2024-12-06 11:06:45.420766] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:39.322 [2024-12-06 11:06:45.420835] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:39.322 [2024-12-06 11:06:45.422929] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:07:39.322 [2024-12-06 11:06:45.422978] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:39.582 [2024-12-06 11:06:45.587339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.582 [2024-12-06 11:06:45.615721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:39.582 [2024-12-06 11:06:45.645892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.582 [2024-12-06 11:06:45.674491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:39.582 [2024-12-06 11:06:45.708518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.582 [2024-12-06 11:06:45.738120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.842 [2024-12-06 11:06:45.757449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.843 [2024-12-06 11:06:45.785371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:39.843 Running I/O for 1 seconds... 00:07:39.843 Running I/O for 1 seconds... 00:07:39.843 Running I/O for 1 seconds... 00:07:39.843 Running I/O for 1 seconds... 
00:07:40.783 10765.00 IOPS, 42.05 MiB/s 00:07:40.783 Latency(us) 00:07:40.783 [2024-12-06T10:06:46.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.783 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:40.783 Nvme1n1 : 1.01 10789.17 42.15 0.00 0.00 11801.10 5242.88 16056.32 00:07:40.783 [2024-12-06T10:06:46.950Z] =================================================================================================================== 00:07:40.783 [2024-12-06T10:06:46.950Z] Total : 10789.17 42.15 0.00 0.00 11801.10 5242.88 16056.32 00:07:40.783 10087.00 IOPS, 39.40 MiB/s 00:07:40.783 Latency(us) 00:07:40.783 [2024-12-06T10:06:46.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.783 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:40.783 Nvme1n1 : 1.01 10173.41 39.74 0.00 0.00 12554.16 3058.35 24794.45 00:07:40.783 [2024-12-06T10:06:46.950Z] =================================================================================================================== 00:07:40.783 [2024-12-06T10:06:46.950Z] Total : 10173.41 39.74 0.00 0.00 12554.16 3058.35 24794.45 00:07:41.043 14820.00 IOPS, 57.89 MiB/s [2024-12-06T10:06:47.210Z] 181552.00 IOPS, 709.19 MiB/s 00:07:41.043 Latency(us) 00:07:41.043 [2024-12-06T10:06:47.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.043 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:41.043 Nvme1n1 : 1.00 181194.71 707.79 0.00 0.00 702.23 296.96 1966.08 00:07:41.043 [2024-12-06T10:06:47.210Z] =================================================================================================================== 00:07:41.043 [2024-12-06T10:06:47.210Z] Total : 181194.71 707.79 0.00 0.00 702.23 296.96 1966.08 00:07:41.043 00:07:41.043 Latency(us) 00:07:41.043 [2024-12-06T10:06:47.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:41.043 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:41.043 Nvme1n1 : 1.01 14896.53 58.19 0.00 0.00 8569.35 3877.55 20534.61 00:07:41.043 [2024-12-06T10:06:47.210Z] =================================================================================================================== 00:07:41.043 [2024-12-06T10:06:47.210Z] Total : 14896.53 58.19 0.00 0.00 8569.35 3877.55 20534.61 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3240321 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3240323 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3240326 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.043 rmmod nvme_tcp 00:07:41.043 rmmod nvme_fabrics 00:07:41.043 rmmod nvme_keyring 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3240289 ']' 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3240289 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3240289 ']' 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3240289 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.043 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3240289 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3240289' 00:07:41.303 killing process with pid 3240289 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3240289 00:07:41.303 11:06:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3240289 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.303 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.883 00:07:43.883 real 0m13.037s 00:07:43.883 user 0m16.356s 00:07:43.883 sys 0m7.685s 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.883 ************************************ 
00:07:43.883 END TEST nvmf_bdev_io_wait 00:07:43.883 ************************************ 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.883 ************************************ 00:07:43.883 START TEST nvmf_queue_depth 00:07:43.883 ************************************ 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:43.883 * Looking for test storage... 00:07:43.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:43.883 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.884 --rc genhtml_branch_coverage=1 00:07:43.884 --rc genhtml_function_coverage=1 00:07:43.884 --rc genhtml_legend=1 00:07:43.884 --rc geninfo_all_blocks=1 00:07:43.884 --rc 
geninfo_unexecuted_blocks=1 00:07:43.884 00:07:43.884 ' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.884 --rc genhtml_branch_coverage=1 00:07:43.884 --rc genhtml_function_coverage=1 00:07:43.884 --rc genhtml_legend=1 00:07:43.884 --rc geninfo_all_blocks=1 00:07:43.884 --rc geninfo_unexecuted_blocks=1 00:07:43.884 00:07:43.884 ' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.884 --rc genhtml_branch_coverage=1 00:07:43.884 --rc genhtml_function_coverage=1 00:07:43.884 --rc genhtml_legend=1 00:07:43.884 --rc geninfo_all_blocks=1 00:07:43.884 --rc geninfo_unexecuted_blocks=1 00:07:43.884 00:07:43.884 ' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.884 --rc genhtml_branch_coverage=1 00:07:43.884 --rc genhtml_function_coverage=1 00:07:43.884 --rc genhtml_legend=1 00:07:43.884 --rc geninfo_all_blocks=1 00:07:43.884 --rc geninfo_unexecuted_blocks=1 00:07:43.884 00:07:43.884 ' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.884 11:06:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.884 11:06:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.884 11:06:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:43.884 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.130 11:06:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:52.130 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.130 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:52.131 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:52.131 Found net devices under 0000:31:00.0: cvl_0_0 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:52.131 Found net devices under 0000:31:00.1: cvl_0_1 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.131 
11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:07:52.131 00:07:52.131 --- 10.0.0.2 ping statistics --- 00:07:52.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.131 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:07:52.131 00:07:52.131 --- 10.0.0.1 ping statistics --- 00:07:52.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.131 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.131 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3245536 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3245536 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3245536 ']' 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.131 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:52.131 [2024-12-06 11:06:58.110233] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:52.131 [2024-12-06 11:06:58.110299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.131 [2024-12-06 11:06:58.222784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.131 [2024-12-06 11:06:58.274241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.131 [2024-12-06 11:06:58.274288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:52.131 [2024-12-06 11:06:58.274297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.131 [2024-12-06 11:06:58.274304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.131 [2024-12-06 11:06:58.274310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.131 [2024-12-06 11:06:58.275127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 [2024-12-06 11:06:58.969018] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 Malloc0 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.072 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 [2024-12-06 11:06:59.018225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.072 11:06:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3245729 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3245729 /var/tmp/bdevperf.sock 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3245729 ']' 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.072 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:53.072 [2024-12-06 11:06:59.074852] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:07:53.072 [2024-12-06 11:06:59.074920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245729 ] 00:07:53.072 [2024-12-06 11:06:59.157614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.072 [2024-12-06 11:06:59.199074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.013 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.013 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:54.013 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:54.013 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.013 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:54.013 NVMe0n1 00:07:54.013 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.013 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.013 Running I/O for 10 seconds... 
00:07:56.360 8341.00 IOPS, 32.58 MiB/s [2024-12-06T10:07:03.465Z] 8701.50 IOPS, 33.99 MiB/s [2024-12-06T10:07:04.403Z] 9640.67 IOPS, 37.66 MiB/s [2024-12-06T10:07:05.341Z] 10223.00 IOPS, 39.93 MiB/s [2024-12-06T10:07:06.279Z] 10544.60 IOPS, 41.19 MiB/s [2024-12-06T10:07:07.221Z] 10738.17 IOPS, 41.95 MiB/s [2024-12-06T10:07:08.605Z] 10868.71 IOPS, 42.46 MiB/s [2024-12-06T10:07:09.175Z] 10998.00 IOPS, 42.96 MiB/s [2024-12-06T10:07:10.559Z] 11064.44 IOPS, 43.22 MiB/s [2024-12-06T10:07:10.559Z] 11131.10 IOPS, 43.48 MiB/s 00:08:04.392 Latency(us) 00:08:04.392 [2024-12-06T10:07:10.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.392 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:04.392 Verification LBA range: start 0x0 length 0x4000 00:08:04.392 NVMe0n1 : 10.07 11150.38 43.56 0.00 0.00 91462.49 24576.00 73400.32 00:08:04.392 [2024-12-06T10:07:10.559Z] =================================================================================================================== 00:08:04.392 [2024-12-06T10:07:10.559Z] Total : 11150.38 43.56 0.00 0.00 91462.49 24576.00 73400.32 00:08:04.392 { 00:08:04.392 "results": [ 00:08:04.392 { 00:08:04.392 "job": "NVMe0n1", 00:08:04.392 "core_mask": "0x1", 00:08:04.392 "workload": "verify", 00:08:04.392 "status": "finished", 00:08:04.392 "verify_range": { 00:08:04.392 "start": 0, 00:08:04.392 "length": 16384 00:08:04.392 }, 00:08:04.392 "queue_depth": 1024, 00:08:04.392 "io_size": 4096, 00:08:04.392 "runtime": 10.070598, 00:08:04.392 "iops": 11150.380543439427, 00:08:04.392 "mibps": 43.55617399781026, 00:08:04.392 "io_failed": 0, 00:08:04.392 "io_timeout": 0, 00:08:04.392 "avg_latency_us": 91462.48782134513, 00:08:04.392 "min_latency_us": 24576.0, 00:08:04.392 "max_latency_us": 73400.32 00:08:04.392 } 00:08:04.392 ], 00:08:04.392 "core_count": 1 00:08:04.392 } 00:08:04.392 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3245729 
00:08:04.392 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3245729 ']' 00:08:04.392 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3245729 00:08:04.392 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:04.392 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3245729 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3245729' 00:08:04.393 killing process with pid 3245729 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3245729 00:08:04.393 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.393 00:08:04.393 Latency(us) 00:08:04.393 [2024-12-06T10:07:10.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.393 [2024-12-06T10:07:10.560Z] =================================================================================================================== 00:08:04.393 [2024-12-06T10:07:10.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3245729 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:04.393 
11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.393 rmmod nvme_tcp 00:08:04.393 rmmod nvme_fabrics 00:08:04.393 rmmod nvme_keyring 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3245536 ']' 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3245536 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3245536 ']' 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3245536 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.393 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3245536 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3245536' 00:08:04.655 killing process with pid 3245536 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3245536 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3245536 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.655 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.203 11:07:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.203 00:08:07.203 real 0m23.310s 00:08:07.203 user 0m25.965s 00:08:07.203 sys 0m7.632s 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.203 ************************************ 00:08:07.203 END TEST nvmf_queue_depth 00:08:07.203 ************************************ 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.203 ************************************ 00:08:07.203 START TEST nvmf_target_multipath 00:08:07.203 ************************************ 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:07.203 * Looking for test storage... 
00:08:07.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:07.203 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:07.203 11:07:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:07.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.203 --rc genhtml_branch_coverage=1 00:08:07.203 --rc genhtml_function_coverage=1 00:08:07.203 --rc genhtml_legend=1 00:08:07.203 --rc geninfo_all_blocks=1 00:08:07.203 --rc geninfo_unexecuted_blocks=1 00:08:07.203 00:08:07.203 ' 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:07.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.203 --rc genhtml_branch_coverage=1 00:08:07.203 --rc genhtml_function_coverage=1 00:08:07.203 --rc genhtml_legend=1 00:08:07.203 --rc geninfo_all_blocks=1 00:08:07.203 --rc geninfo_unexecuted_blocks=1 00:08:07.203 00:08:07.203 ' 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:07.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.203 --rc genhtml_branch_coverage=1 00:08:07.203 --rc genhtml_function_coverage=1 00:08:07.203 --rc genhtml_legend=1 00:08:07.203 --rc geninfo_all_blocks=1 00:08:07.203 --rc geninfo_unexecuted_blocks=1 00:08:07.203 00:08:07.203 ' 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:07.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.203 --rc genhtml_branch_coverage=1 00:08:07.203 --rc genhtml_function_coverage=1 00:08:07.203 --rc genhtml_legend=1 00:08:07.203 --rc geninfo_all_blocks=1 00:08:07.203 --rc geninfo_unexecuted_blocks=1 00:08:07.203 00:08:07.203 ' 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.203 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.204 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:15.347 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:15.347 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.347 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:15.348 Found net devices under 0000:31:00.0: cvl_0_0 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.348 11:07:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:15.348 Found net devices under 0000:31:00.1: cvl_0_1 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.348 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:08:15.608 00:08:15.608 --- 10.0.0.2 ping statistics --- 00:08:15.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.608 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:08:15.608 00:08:15.608 --- 10.0.0.1 ping statistics --- 00:08:15.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.608 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:15.608 only one NIC for nvmf test 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:15.608 11:07:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.608 rmmod nvme_tcp 00:08:15.608 rmmod nvme_fabrics 00:08:15.608 rmmod nvme_keyring 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.608 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.609 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.609 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.609 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.609 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.609 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.151 00:08:18.151 real 0m10.880s 00:08:18.151 user 0m2.409s 00:08:18.151 sys 0m6.414s 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:18.151 ************************************ 00:08:18.151 END TEST nvmf_target_multipath 00:08:18.151 ************************************ 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.151 ************************************ 00:08:18.151 START TEST nvmf_zcopy 00:08:18.151 ************************************ 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.151 * Looking for test storage... 00:08:18.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:18.151 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.151 11:07:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.151 --rc genhtml_branch_coverage=1 00:08:18.151 --rc genhtml_function_coverage=1 00:08:18.151 --rc genhtml_legend=1 00:08:18.151 --rc geninfo_all_blocks=1 00:08:18.151 --rc geninfo_unexecuted_blocks=1 00:08:18.151 00:08:18.151 ' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.151 --rc genhtml_branch_coverage=1 00:08:18.151 --rc genhtml_function_coverage=1 00:08:18.151 --rc genhtml_legend=1 00:08:18.151 --rc geninfo_all_blocks=1 00:08:18.151 --rc geninfo_unexecuted_blocks=1 00:08:18.151 00:08:18.151 ' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.151 --rc genhtml_branch_coverage=1 00:08:18.151 --rc genhtml_function_coverage=1 00:08:18.151 --rc genhtml_legend=1 00:08:18.151 --rc geninfo_all_blocks=1 00:08:18.151 --rc geninfo_unexecuted_blocks=1 00:08:18.151 00:08:18.151 ' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:18.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.151 --rc genhtml_branch_coverage=1 00:08:18.151 --rc 
genhtml_function_coverage=1 00:08:18.151 --rc genhtml_legend=1 00:08:18.151 --rc geninfo_all_blocks=1 00:08:18.151 --rc geninfo_unexecuted_blocks=1 00:08:18.151 00:08:18.151 ' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.151 11:07:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.151 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.152 11:07:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.152 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.298 11:07:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:26.298 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:26.298 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.298 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:26.299 Found net devices under 0000:31:00.0: cvl_0_0 00:08:26.299 11:07:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:26.299 Found net devices under 0000:31:00.1: cvl_0_1 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.299 11:07:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:26.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:08:26.299 00:08:26.299 --- 10.0.0.2 ping statistics --- 00:08:26.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.299 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:08:26.299 00:08:26.299 --- 10.0.0.1 ping statistics --- 00:08:26.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.299 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3257468 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3257468 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3257468 ']' 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.299 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.299 [2024-12-06 11:07:32.440813] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:08:26.299 [2024-12-06 11:07:32.440890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.561 [2024-12-06 11:07:32.549897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.562 [2024-12-06 11:07:32.599636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.562 [2024-12-06 11:07:32.599687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:26.562 [2024-12-06 11:07:32.599697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.562 [2024-12-06 11:07:32.599705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.562 [2024-12-06 11:07:32.599712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.562 [2024-12-06 11:07:32.600501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.135 [2024-12-06 11:07:33.267907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.135 [2024-12-06 11:07:33.284098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.135 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.396 malloc0 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.396 { 00:08:27.396 "params": { 00:08:27.396 "name": "Nvme$subsystem", 00:08:27.396 "trtype": "$TEST_TRANSPORT", 00:08:27.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.396 "adrfam": "ipv4", 00:08:27.396 "trsvcid": "$NVMF_PORT", 00:08:27.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.396 "hdgst": ${hdgst:-false}, 00:08:27.396 "ddgst": ${ddgst:-false} 00:08:27.396 }, 00:08:27.396 "method": "bdev_nvme_attach_controller" 00:08:27.396 } 00:08:27.396 EOF 00:08:27.396 )") 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:27.396 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.396 "params": { 00:08:27.396 "name": "Nvme1", 00:08:27.396 "trtype": "tcp", 00:08:27.396 "traddr": "10.0.0.2", 00:08:27.396 "adrfam": "ipv4", 00:08:27.396 "trsvcid": "4420", 00:08:27.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.397 "hdgst": false, 00:08:27.397 "ddgst": false 00:08:27.397 }, 00:08:27.397 "method": "bdev_nvme_attach_controller" 00:08:27.397 }' 00:08:27.397 [2024-12-06 11:07:33.370172] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:08:27.397 [2024-12-06 11:07:33.370252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257817 ] 00:08:27.397 [2024-12-06 11:07:33.454206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.397 [2024-12-06 11:07:33.492826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.658 Running I/O for 10 seconds... 
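The `gen_nvmf_target_json` heredoc above expands into the single `bdev_nvme_attach_controller` entry printed in the trace (Nvme1, tcp, 10.0.0.2:4420, digests disabled), which bdevperf consumes via `--json /dev/fd/62`. As a minimal sketch, the same entry can be rebuilt in Python; the `attach_controller_entry` helper is illustrative only (not SPDK code), and the field values are taken verbatim from the log output above:

```python
import json

def attach_controller_entry(i, traddr, trsvcid):
    # Mirror the config object the shell heredoc emits: one
    # bdev_nvme_attach_controller call per subsystem index.
    return {
        "params": {
            "name": f"Nvme{i}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{i}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{i}",
            "hdgst": False,   # header digest off, as in the printed config
            "ddgst": False,   # data digest off, as in the printed config
        },
        "method": "bdev_nvme_attach_controller",
    }

entry = attach_controller_entry(1, "10.0.0.2", "4420")
print(json.dumps(entry, indent=2))
```

Note that bdevperf's full `--json` payload wraps such entries in a larger config document; the sketch covers only the connection entry visible in this trace.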
00:08:29.540 6727.00 IOPS, 52.55 MiB/s [2024-12-06T10:07:37.089Z] 6789.00 IOPS, 53.04 MiB/s [2024-12-06T10:07:38.030Z] 6805.00 IOPS, 53.16 MiB/s [2024-12-06T10:07:38.971Z] 6816.00 IOPS, 53.25 MiB/s [2024-12-06T10:07:39.912Z] 7293.20 IOPS, 56.98 MiB/s [2024-12-06T10:07:40.852Z] 7724.33 IOPS, 60.35 MiB/s [2024-12-06T10:07:41.792Z] 8018.71 IOPS, 62.65 MiB/s [2024-12-06T10:07:42.735Z] 8249.75 IOPS, 64.45 MiB/s [2024-12-06T10:07:44.120Z] 8431.67 IOPS, 65.87 MiB/s [2024-12-06T10:07:44.120Z] 8577.80 IOPS, 67.01 MiB/s 00:08:37.953 Latency(us) 00:08:37.953 [2024-12-06T10:07:44.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.953 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:37.953 Verification LBA range: start 0x0 length 0x1000 00:08:37.953 Nvme1n1 : 10.01 8579.90 67.03 0.00 0.00 14865.80 1221.97 27743.57 00:08:37.953 [2024-12-06T10:07:44.120Z] =================================================================================================================== 00:08:37.953 [2024-12-06T10:07:44.120Z] Total : 8579.90 67.03 0.00 0.00 14865.80 1221.97 27743.57 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3259831 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.953 11:07:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.953 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.953 { 00:08:37.954 "params": { 00:08:37.954 "name": "Nvme$subsystem", 00:08:37.954 "trtype": "$TEST_TRANSPORT", 00:08:37.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.954 "adrfam": "ipv4", 00:08:37.954 "trsvcid": "$NVMF_PORT", 00:08:37.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.954 "hdgst": ${hdgst:-false}, 00:08:37.954 "ddgst": ${ddgst:-false} 00:08:37.954 }, 00:08:37.954 "method": "bdev_nvme_attach_controller" 00:08:37.954 } 00:08:37.954 EOF 00:08:37.954 )") 00:08:37.954 [2024-12-06 11:07:43.837227] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.837255] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:37.954 [2024-12-06 11:07:43.845218] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.845227] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:37.954 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:37.954 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.954 "params": { 00:08:37.954 "name": "Nvme1", 00:08:37.954 "trtype": "tcp", 00:08:37.954 "traddr": "10.0.0.2", 00:08:37.954 "adrfam": "ipv4", 00:08:37.954 "trsvcid": "4420", 00:08:37.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.954 "hdgst": false, 00:08:37.954 "ddgst": false 00:08:37.954 }, 00:08:37.954 "method": "bdev_nvme_attach_controller" 00:08:37.954 }' 00:08:37.954 [2024-12-06 11:07:43.853237] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.853245] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.861257] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.861264] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.873288] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.873296] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.881310] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.881317] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.889330] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.889339] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.894657] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:08:37.954 [2024-12-06 11:07:43.894704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259831 ] 00:08:37.954 [2024-12-06 11:07:43.897352] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.897359] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.905371] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.905378] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.913393] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.913399] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.921412] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.921418] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.929432] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.929439] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.937452] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.937459] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.945474] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.945481] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:37.954 [2024-12-06 11:07:43.953495] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.953502] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.961514] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.961520] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.969536] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.969542] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.970868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.954 [2024-12-06 11:07:43.977556] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.977564] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.985576] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.985583] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:43.993597] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:43.993605] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.001616] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.001624] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.006076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.954 [2024-12-06 11:07:44.009636] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.009644] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.017660] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.017669] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.025681] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.025692] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.033699] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.033710] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.041717] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.041727] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.049736] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.049745] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.057758] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.057766] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.065776] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.065784] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.954 [2024-12-06 11:07:44.073799] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:37.954 [2024-12-06 11:07:44.073805] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.955 [2024-12-06 11:07:44.081820] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.955 [2024-12-06 11:07:44.081827] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.955 [2024-12-06 11:07:44.089849] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.955 [2024-12-06 11:07:44.089871] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.955 [2024-12-06 11:07:44.097866] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.955 [2024-12-06 11:07:44.097875] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.955 [2024-12-06 11:07:44.105885] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.955 [2024-12-06 11:07:44.105894] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.955 [2024-12-06 11:07:44.113907] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.955 [2024-12-06 11:07:44.113928] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.121924] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.121933] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.129944] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.129952] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.137964] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 
[2024-12-06 11:07:44.137970] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.145984] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.145991] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.154006] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.154012] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.162026] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.162033] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.170049] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.170058] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.178068] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.178074] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.186091] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.186097] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.194111] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.194117] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.202131] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.202138] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.210154] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.210162] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.218179] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.218186] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.226194] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.226201] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.234216] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.234222] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.242235] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.242241] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.250256] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.250264] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.258275] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.258282] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.266715] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.266729] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.217 [2024-12-06 11:07:44.274320] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.274330] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 Running I/O for 5 seconds... 00:08:38.217 [2024-12-06 11:07:44.282338] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.282345] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.292888] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.292904] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.301074] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.301088] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.309539] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.309554] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.318566] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.318581] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.326921] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.326935] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.336192] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.336206] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:38.217 [2024-12-06 11:07:44.344794] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.344808] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.353799] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.353813] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.362843] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.362857] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.371555] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.371570] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.217 [2024-12-06 11:07:44.380177] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.217 [2024-12-06 11:07:44.380191] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.389355] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.389369] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.398400] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.398414] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.407185] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.407200] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.415801] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.415816] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.424817] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.424832] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.433830] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.433849] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.442931] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.442946] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.451573] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.451588] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.460195] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.460209] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.468964] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.468978] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.477704] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.477718] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.486109] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.486124] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.494398] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.494412] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.503525] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.503539] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.512296] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.512309] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.520712] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.520726] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.529340] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.529355] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.538120] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.538134] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.546805] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.546820] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.555534] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 
[2024-12-06 11:07:44.555549] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.564223] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.564237] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.573238] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.573253] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.581990] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.582005] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.590712] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.590726] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.479 [2024-12-06 11:07:44.599183] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.479 [2024-12-06 11:07:44.599201] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.480 [2024-12-06 11:07:44.607614] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.480 [2024-12-06 11:07:44.607628] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.480 [2024-12-06 11:07:44.616177] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.480 [2024-12-06 11:07:44.616192] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.480 [2024-12-06 11:07:44.624717] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.480 [2024-12-06 11:07:44.624732] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.480 [2024-12-06 11:07:44.633565] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.480 [2024-12-06 11:07:44.633580] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.480 [2024-12-06 11:07:44.642589] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.480 [2024-12-06 11:07:44.642604] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.650918] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.650933] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.659677] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.659692] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.668701] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.668716] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.677811] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.677825] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.686324] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.686338] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.695180] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.695195] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.742 [2024-12-06 11:07:44.704134] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.704149] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.712498] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.712512] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.721115] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.721129] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.729950] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.729965] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.738561] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.738575] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.747242] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.747256] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.755831] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.755846] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.764707] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.764725] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.773278] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.773293] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.782578] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.782593] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.791798] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.791812] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.800426] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.800440] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.808900] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.808914] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.817674] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.817689] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.826648] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.826663] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.835249] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.835263] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.843599] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.843614] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.852352] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.852367] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.861001] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.861016] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.870098] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.870112] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.878591] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.878605] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.887371] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.887385] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.896718] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.896733] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.742 [2024-12-06 11:07:44.905252] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.742 [2024-12-06 11:07:44.905267] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.913917] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 
[2024-12-06 11:07:44.913932] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.922917] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.922931] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.930650] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.930668] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.939790] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.939805] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.948546] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.948561] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.957668] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.957683] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.966231] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.966245] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.975089] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.975104] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.984055] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.984069] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:44.992646] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:44.992660] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.001084] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.001098] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.009898] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.009912] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.018574] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.018589] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.027257] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.027271] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.036352] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.036366] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.045167] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.045181] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.053632] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.053646] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:39.004 [2024-12-06 11:07:45.062734] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.062749] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.071853] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.071872] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.080560] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.080576] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.089116] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.089130] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.097511] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.097526] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.106505] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.106519] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.115274] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.115288] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.124211] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.124225] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.132996] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.004 [2024-12-06 11:07:45.133011] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.004 [2024-12-06 11:07:45.142568] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-12-06 11:07:45.142582] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-12-06 11:07:45.150893] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-12-06 11:07:45.150908] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-12-06 11:07:45.159516] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-12-06 11:07:45.159530] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-12-06 11:07:45.168393] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-12-06 11:07:45.168407] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.177502] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.177516] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.186072] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.186086] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.194495] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.194510] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.203246] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.203260] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.211876] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.211891] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.220403] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.220417] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.229363] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.229377] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.238395] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.238409] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.246833] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.246846] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.255775] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.255790] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.264784] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.264798] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.273263] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 
[2024-12-06 11:07:45.273278] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.282662] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.282677] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 19210.00 IOPS, 150.08 MiB/s [2024-12-06T10:07:45.433Z] [2024-12-06 11:07:45.291036] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.291051] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.299578] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.299592] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.307834] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.307849] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.316763] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.316777] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.325415] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.325429] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.334394] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.334408] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.342895] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 
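The repeating error pair above comes from a test loop that keeps trying to attach a namespace with NSID 1 to a subsystem that already has one. The sketch below is a minimal illustration of that duplicate-NSID rejection; the `Subsystem` class, method names, and return convention are illustrative assumptions, not SPDK's actual implementation.

```python
# Illustrative sketch (not SPDK code) of the check behind the repeated
# "Requested NSID 1 already in use" errors in the log above.

class Subsystem:
    def __init__(self):
        self.namespaces = {}  # nsid -> attached bdev name

    def add_ns(self, bdev_name, nsid):
        """Attach a bdev as a namespace; reject an NSID that is taken."""
        if nsid in self.namespaces:
            return f"Requested NSID {nsid} already in use"
        self.namespaces[nsid] = bdev_name
        return None  # success

subsys = Subsystem()
first = subsys.add_ns("Malloc0", nsid=1)   # succeeds: NSID 1 is free
second = subsys.add_ns("Malloc1", nsid=1)  # fails: NSID 1 is taken, as in the log
print(first)   # None
print(second)  # Requested NSID 1 already in use
```

Each RPC attempt in the log hits the equivalent of the `second` call, so the subsystem reports the NSID conflict and the RPC layer logs "Unable to add namespace" without changing state.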
[2024-12-06 11:07:45.342909] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.351723] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.351737] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.360154] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.360168] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.369251] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.369265] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.377715] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.377729] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.386072] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.386086] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.394847] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.394864] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.403609] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.403623] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.411932] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.411946] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.420180] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.420197] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.266 [2024-12-06 11:07:45.428792] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.266 [2024-12-06 11:07:45.428806] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.437363] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.437378] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.446364] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.446378] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.455105] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.455119] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.463634] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.463650] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.472722] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.472736] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.481766] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.481780] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:39.526 [2024-12-06 11:07:45.490594] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.490608] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.499431] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.499445] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.508487] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.508501] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.517576] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.517589] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.526688] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.526703] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.535222] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.535236] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.544145] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.544159] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.552929] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.552943] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.561922] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.561936] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.570941] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.570955] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.579743] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.579757] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.588386] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.588404] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.597381] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.597395] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.605792] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.605806] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.614709] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.614723] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.623291] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.623305] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.632031] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.632045] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.640539] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.640553] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.526 [2024-12-06 11:07:45.649003] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.526 [2024-12-06 11:07:45.649017] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.527 [2024-12-06 11:07:45.658136] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.527 [2024-12-06 11:07:45.658150] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.527 [2024-12-06 11:07:45.666591] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.527 [2024-12-06 11:07:45.666605] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.527 [2024-12-06 11:07:45.675326] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.527 [2024-12-06 11:07:45.675340] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.527 [2024-12-06 11:07:45.684319] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.527 [2024-12-06 11:07:45.684333] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.527 [2024-12-06 11:07:45.693250] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.527 [2024-12-06 11:07:45.693264] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.702137] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 
[2024-12-06 11:07:45.702152] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.710543] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.710557] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.719112] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.719126] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.727646] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.727660] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.736662] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.736676] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.744598] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.744612] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.753834] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.753852] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.762328] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.762342] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.771092] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.771107] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.779868] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.779883] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.789073] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.789087] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.797338] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.797352] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.805948] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.805962] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.814962] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.814976] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.823726] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.823740] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.832497] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.832511] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.840956] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.840970] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:39.787 [2024-12-06 11:07:45.849436] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.849450] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.858551] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.858565] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.867670] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.867685] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.876744] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.876758] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.885297] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.885310] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.894022] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.894036] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.902816] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.902830] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.911884] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.911898] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.920726] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.920743] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.929817] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.929832] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.938349] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.938363] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.787 [2024-12-06 11:07:45.946832] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.787 [2024-12-06 11:07:45.946846] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:45.955977] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:45.955992] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:45.964943] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:45.964957] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:45.973983] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:45.973997] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:45.982371] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:45.982385] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:45.991637] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:45.991652] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.000632] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.000647] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.009335] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.009349] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.018337] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.018352] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.027420] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.027434] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.035999] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.036013] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.045163] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.045177] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.054058] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.054072] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.062967] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 
[2024-12-06 11:07:46.062981] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.071632] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.071646] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.080516] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.080531] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.089439] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.089453] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.098118] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.098133] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.106693] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.106707] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.115721] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.115735] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.124780] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.124795] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.133799] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.133815] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.142125] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.142139] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.151139] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.151153] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.159684] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.159699] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.049 [2024-12-06 11:07:46.168280] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.049 [2024-12-06 11:07:46.168294] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.050 [2024-12-06 11:07:46.176624] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.050 [2024-12-06 11:07:46.176639] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.050 [2024-12-06 11:07:46.185558] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.050 [2024-12-06 11:07:46.185572] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.050 [2024-12-06 11:07:46.194186] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.050 [2024-12-06 11:07:46.194200] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.050 [2024-12-06 11:07:46.203294] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.050 [2024-12-06 11:07:46.203308] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:40.050 [2024-12-06 11:07:46.211884] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.050 [2024-12-06 11:07:46.211899] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.219903] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.219918] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.228932] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.228946] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.237841] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.237855] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.246241] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.246255] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.254785] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.254799] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.263574] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.263589] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.272139] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.272153] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.281039] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.281053] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 19294.50 IOPS, 150.74 MiB/s [2024-12-06T10:07:46.479Z] [2024-12-06 11:07:46.289374] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.289388] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.298168] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.298183] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.306896] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.306911] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.315770] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.315784] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.324204] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.324219] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.333041] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.333056] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.341368] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.341383] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.350269] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.350284] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.359166] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.359180] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.368061] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.368076] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.377146] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.377160] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.385650] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.385665] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.394896] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.394911] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.404033] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.404048] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.412494] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.412508] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.421211] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.421226] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.429621] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.429636] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.438280] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.438294] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.447330] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.447345] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.456461] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.456476] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.464933] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.464948] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.312 [2024-12-06 11:07:46.473995] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.312 [2024-12-06 11:07:46.474010] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.482993] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.483008] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.490752] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 
[2024-12-06 11:07:46.490767] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.500170] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.500184] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.508588] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.508602] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.517513] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.517527] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.525413] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.525428] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.534318] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.534332] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.543220] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.543234] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.552316] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.552330] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.560904] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.560918] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.574 [2024-12-06 11:07:46.569204] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.574 [2024-12-06 11:07:46.569218] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.359 19324.33 IOPS, 150.97 MiB/s [2024-12-06T10:07:47.526Z]
[2024-12-06 11:07:47.995209] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.882 [2024-12-06 11:07:48.003731] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.882 [2024-12-06 11:07:48.003746] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.882 [2024-12-06 11:07:48.012475] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.882 [2024-12-06 11:07:48.012490] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.882 [2024-12-06 11:07:48.020971] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.882 [2024-12-06 11:07:48.020986] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.882 [2024-12-06 11:07:48.029843] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.882 [2024-12-06 11:07:48.029858] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.882 [2024-12-06 11:07:48.038318] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.882 [2024-12-06 11:07:48.038333] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.882 [2024-12-06 11:07:48.047482] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.882 [2024-12-06 11:07:48.047497] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.056309] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.056323] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.065391] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.065409] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.074027] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.074041] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.091085] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.091100] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.099006] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.099020] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.107803] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.107817] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.116539] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.116553] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.125279] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.125294] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.134057] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.134072] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.142430] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.142444] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:42.144 [2024-12-06 11:07:48.151495] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.151510] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.159766] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.159780] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.168427] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.168442] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.177328] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.177343] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.186334] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.186350] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.195289] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.195304] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.204326] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.204340] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.213372] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.213386] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.221559] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.221573] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.230583] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.230598] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.239543] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.239562] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.247961] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.247975] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.256494] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.256509] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.265339] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.265353] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.274404] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.274418] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.283459] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.283474] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 19342.75 IOPS, 151.12 MiB/s [2024-12-06T10:07:48.311Z] [2024-12-06 11:07:48.292049] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.292063] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.300362] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.300377] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.144 [2024-12-06 11:07:48.309125] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.144 [2024-12-06 11:07:48.309140] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.318016] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.318030] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.326358] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.326372] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.335202] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.335217] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.343709] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.343723] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.352390] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.352405] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.361172] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.361186] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.370196] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.370210] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.378670] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.378685] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.387933] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.387947] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.395976] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.395991] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.404774] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.404788] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.413465] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.413480] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.422672] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.422686] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.430977] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 
[2024-12-06 11:07:48.430991] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.439956] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.439971] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.449003] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.449017] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.405 [2024-12-06 11:07:48.457167] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.405 [2024-12-06 11:07:48.457181] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.465951] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.465965] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.474391] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.474405] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.483330] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.483344] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.492164] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.492178] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.500780] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.500794] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.509775] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.509790] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.518641] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.518655] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.527556] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.527570] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.536642] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.536656] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.545092] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.545107] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.553878] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.553892] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.563038] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.563052] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.406 [2024-12-06 11:07:48.571706] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.406 [2024-12-06 11:07:48.571720] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:42.666 [2024-12-06 11:07:48.580343] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.580358] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.588907] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.588921] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.597307] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.597321] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.606241] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.606255] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.615322] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.615336] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.624355] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.624369] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.633228] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.633242] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.642448] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.642462] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.651254] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.651269] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.659741] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.659755] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.668898] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.668912] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.677323] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.677337] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.685877] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.685891] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.694870] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.694885] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.703716] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.703731] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.712648] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.712662] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.721426] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.721440] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.730525] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.730543] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.739581] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.739595] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.748091] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.748105] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.757402] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.757416] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.765339] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.765352] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.774095] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.774109] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.782968] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.782982] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.791428] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 
[2024-12-06 11:07:48.791442] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.800407] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.800421] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.809346] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.666 [2024-12-06 11:07:48.809359] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.666 [2024-12-06 11:07:48.817999] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.667 [2024-12-06 11:07:48.818012] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.667 [2024-12-06 11:07:48.826871] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.667 [2024-12-06 11:07:48.826885] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.835866] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.835880] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.844758] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.844772] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.854069] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.854083] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.862500] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.862513] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.871480] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.871493] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.879876] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.879890] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.888506] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.888519] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.897013] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.897032] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.905307] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.905321] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.914515] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.914529] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.923317] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.923331] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.932158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.932172] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:43.063 [2024-12-06 11:07:48.940859] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.940877] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.949794] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.949809] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.958243] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.958257] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.967022] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.967036] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.975936] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.975951] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.984921] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.984935] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:48.993951] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:48.993965] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.002464] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.002479] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.011196] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.011210] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.020048] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.020063] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.029313] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.029327] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.037605] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.037620] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.046397] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.046411] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.055390] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.055404] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.064045] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.064062] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.072720] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.072734] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.081345] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.081359] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.089770] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.089784] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.098640] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.098654] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.107514] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.107529] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.116010] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.116024] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.124668] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.124682] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.063 [2024-12-06 11:07:49.133927] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.063 [2024-12-06 11:07:49.133941] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.413 [2024-12-06 11:07:49.142832] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.413 [2024-12-06 11:07:49.142846] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.413 [2024-12-06 11:07:49.151565] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.413 
[2024-12-06 11:07:49.151580] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.413 [2024-12-06 11:07:49.160049] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.413 [2024-12-06 11:07:49.160064] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.413 [2024-12-06 11:07:49.169095] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.413 [2024-12-06 11:07:49.169109] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.178136] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.178151] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.186846] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.186859] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.195789] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.195804] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.204911] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.204926] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.213766] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.213781] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.222791] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.222805] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.231221] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.231243] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.239827] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.239841] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.248187] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.248201] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.256715] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.256729] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.265622] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.265636] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.274232] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.274246] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.283370] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.283384] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 19358.80 IOPS, 151.24 MiB/s [2024-12-06T10:07:49.581Z] [2024-12-06 11:07:49.292484] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.292498] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.298035] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.298048] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 00:08:43.414 Latency(us) 00:08:43.414 [2024-12-06T10:07:49.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.414 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:43.414 Nvme1n1 : 5.01 19359.55 151.25 0.00 0.00 6605.06 2607.79 16820.91 00:08:43.414 [2024-12-06T10:07:49.581Z] =================================================================================================================== 00:08:43.414 [2024-12-06T10:07:49.581Z] Total : 19359.55 151.25 0.00 0.00 6605.06 2607.79 16820.91 00:08:43.414 [2024-12-06 11:07:49.306053] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.306064] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.314081] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.314092] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.322096] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.322105] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.330116] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.330125] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.338135] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:43.414 [2024-12-06 11:07:49.338144] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.346155] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.346164] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.354173] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.354181] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.362191] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.362200] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.370212] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.370221] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.378232] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.378240] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.386253] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.386260] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.394274] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.394283] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.402293] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.402301] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 [2024-12-06 11:07:49.410313] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.414 [2024-12-06 11:07:49.410320] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3259831) - No such process 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3259831 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.414 delay0 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.414 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:43.414 [2024-12-06 11:07:49.555007] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:49.998 Initializing NVMe Controllers 00:08:49.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:49.998 Initialization complete. Launching workers. 00:08:49.998 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 301 00:08:49.998 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 588, failed to submit 33 00:08:49.998 success 403, unsuccessful 185, failed 0 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.998 rmmod nvme_tcp 00:08:49.998 rmmod nvme_fabrics 00:08:49.998 rmmod nvme_keyring 00:08:49.998 11:07:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3257468 ']' 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3257468 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3257468 ']' 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3257468 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3257468 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3257468' 00:08:49.998 killing process with pid 3257468 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3257468 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3257468 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.998 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.911 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.911 00:08:51.911 real 0m34.220s 00:08:51.911 user 0m44.439s 00:08:51.911 sys 0m10.924s 00:08:51.911 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.911 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.911 ************************************ 00:08:51.911 END TEST nvmf_zcopy 00:08:51.911 ************************************ 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.173 ************************************ 00:08:52.173 START TEST nvmf_nmic 00:08:52.173 ************************************ 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:52.173 * Looking for test storage... 00:08:52.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:52.173 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.174 --rc genhtml_branch_coverage=1 00:08:52.174 --rc genhtml_function_coverage=1 00:08:52.174 --rc genhtml_legend=1 00:08:52.174 --rc geninfo_all_blocks=1 00:08:52.174 --rc geninfo_unexecuted_blocks=1 00:08:52.174 00:08:52.174 ' 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.174 --rc genhtml_branch_coverage=1 00:08:52.174 --rc genhtml_function_coverage=1 00:08:52.174 --rc genhtml_legend=1 00:08:52.174 --rc geninfo_all_blocks=1 00:08:52.174 --rc geninfo_unexecuted_blocks=1 00:08:52.174 00:08:52.174 ' 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.174 --rc genhtml_branch_coverage=1 00:08:52.174 --rc genhtml_function_coverage=1 00:08:52.174 --rc genhtml_legend=1 00:08:52.174 --rc geninfo_all_blocks=1 00:08:52.174 --rc geninfo_unexecuted_blocks=1 00:08:52.174 00:08:52.174 ' 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:52.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.174 --rc genhtml_branch_coverage=1 00:08:52.174 --rc genhtml_function_coverage=1 00:08:52.174 --rc genhtml_legend=1 00:08:52.174 --rc geninfo_all_blocks=1 00:08:52.174 --rc geninfo_unexecuted_blocks=1 00:08:52.174 00:08:52.174 ' 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.174 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # 
uname -s 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.436 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.437 
11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.437 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.583 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.584 11:08:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:00.584 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:00.584 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:00.584 Found net devices under 0000:31:00.0: cvl_0_0 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:00.584 Found net devices under 0000:31:00.1: cvl_0_1 00:09:00.584 
11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.584 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:09:00.585 00:09:00.585 --- 10.0.0.2 ping statistics --- 00:09:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.585 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:09:00.585 00:09:00.585 --- 10.0.0.1 ping statistics --- 00:09:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.585 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3266883 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3266883 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3266883 ']' 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.585 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.847 [2024-12-06 11:08:06.787400] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:09:00.847 [2024-12-06 11:08:06.787471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.847 [2024-12-06 11:08:06.880368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.847 [2024-12-06 11:08:06.923026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.847 [2024-12-06 11:08:06.923062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:00.847 [2024-12-06 11:08:06.923071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.847 [2024-12-06 11:08:06.923078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.847 [2024-12-06 11:08:06.923083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.847 [2024-12-06 11:08:06.924701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.847 [2024-12-06 11:08:06.924836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.847 [2024-12-06 11:08:06.924996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.847 [2024-12-06 11:08:06.924996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 [2024-12-06 11:08:07.644625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.792 
11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 Malloc0 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 [2024-12-06 11:08:07.715298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:01.792 test case1: single bdev can't be used in multiple subsystems 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 [2024-12-06 11:08:07.751222] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:01.792 [2024-12-06 
11:08:07.751241] subsystem.c:2310:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:01.792 [2024-12-06 11:08:07.751249] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.792 request: 00:09:01.792 { 00:09:01.792 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:01.792 "namespace": { 00:09:01.792 "bdev_name": "Malloc0", 00:09:01.792 "no_auto_visible": false, 00:09:01.792 "hide_metadata": false 00:09:01.792 }, 00:09:01.792 "method": "nvmf_subsystem_add_ns", 00:09:01.792 "req_id": 1 00:09:01.792 } 00:09:01.792 Got JSON-RPC error response 00:09:01.792 response: 00:09:01.792 { 00:09:01.792 "code": -32602, 00:09:01.792 "message": "Invalid parameters" 00:09:01.792 } 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:01.792 Adding namespace failed - expected result. 
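Test case 1 above deliberately drives `nvmf_subsystem_add_ns` into an error and treats success as the failure mode (`nmic_status=1` on RPC failure, then `'[' 1 -eq 0 ']'` confirms the failure was expected). A minimal sketch of that expected-failure idiom, assuming a hypothetical helper name not taken from the SPDK scripts:

```shell
# Run a command that is *supposed* to fail; succeed only if it does.
# Mirrors the nmic_status pattern in the trace above (names illustrative).
expect_failure() {
    if "$@"; then
        echo "unexpected success" >&2
        return 1
    fi
    echo "failed as expected"
}

expect_failure false   # the command fails, so the check passes
```

This inverts the usual `set -e` convention: the RPC call's non-zero exit is the passing condition, so the test must capture the status explicitly rather than let the shell abort.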
00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:01.792 test case2: host connect to nvmf target in multiple paths 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 [2024-12-06 11:08:07.763374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.178 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:05.090 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.090 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:05.090 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.090 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:05.090 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
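The `waitforserial` helper traced below polls `lsblk` every two seconds, up to 16 attempts, until a block device with the expected serial appears. A minimal sketch of that poll-until-ready loop, with a hypothetical function name and a shortened sleep interval (the real helper greps `lsblk -l -o NAME,SERIAL` for the serial string):

```shell
# Retry a probe command until it succeeds or attempts run out,
# mirroring the waitforserial loop in the log (interval shortened).
wait_for() {
    local i=0 max=15
    while (( i++ <= max )); do
        "$@" && return 0   # probe succeeded: device is ready
        sleep 0.1
    done
    return 1               # gave up after max+1 attempts
}

wait_for true && echo "device ready"
```

Bounding the retries matters here: a fabric connect that never surfaces a namespace would otherwise hang the whole autotest stage instead of failing it.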
00:09:07.006 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:07.006 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:07.006 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.006 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:07.006 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.006 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:07.006 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:07.006 [global] 00:09:07.006 thread=1 00:09:07.006 invalidate=1 00:09:07.006 rw=write 00:09:07.006 time_based=1 00:09:07.006 runtime=1 00:09:07.006 ioengine=libaio 00:09:07.006 direct=1 00:09:07.006 bs=4096 00:09:07.006 iodepth=1 00:09:07.006 norandommap=0 00:09:07.006 numjobs=1 00:09:07.006 00:09:07.006 verify_dump=1 00:09:07.006 verify_backlog=512 00:09:07.006 verify_state_save=0 00:09:07.006 do_verify=1 00:09:07.006 verify=crc32c-intel 00:09:07.006 [job0] 00:09:07.006 filename=/dev/nvme0n1 00:09:07.006 Could not set queue depth (nvme0n1) 00:09:07.266 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.266 fio-3.35 00:09:07.266 Starting 1 thread 00:09:08.652 00:09:08.652 job0: (groupid=0, jobs=1): err= 0: pid=3268423: Fri Dec 6 11:08:14 2024 00:09:08.652 read: IOPS=20, BW=81.3KiB/s (83.3kB/s)(84.0KiB/1033msec) 00:09:08.652 slat (nsec): min=9131, max=28141, avg=26108.76, stdev=3926.75 00:09:08.652 clat (usec): min=40901, max=42093, avg=41853.42, stdev=314.02 00:09:08.652 lat (usec): min=40928, max=42121, 
avg=41879.53, stdev=314.49 00:09:08.652 clat percentiles (usec): 00:09:08.652 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:08.652 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:08.652 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:08.652 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:08.652 | 99.99th=[42206] 00:09:08.652 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:09:08.652 slat (nsec): min=8819, max=79511, avg=22902.23, stdev=12923.58 00:09:08.652 clat (usec): min=126, max=711, avg=271.05, stdev=124.64 00:09:08.652 lat (usec): min=141, max=744, avg=293.95, stdev=130.21 00:09:08.652 clat percentiles (usec): 00:09:08.652 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:09:08.652 | 30.00th=[ 159], 40.00th=[ 221], 50.00th=[ 247], 60.00th=[ 265], 00:09:08.652 | 70.00th=[ 330], 80.00th=[ 375], 90.00th=[ 465], 95.00th=[ 537], 00:09:08.652 | 99.00th=[ 586], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 709], 00:09:08.652 | 99.99th=[ 709] 00:09:08.652 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.652 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.652 lat (usec) : 250=49.72%, 500=40.53%, 750=5.82% 00:09:08.652 lat (msec) : 50=3.94% 00:09:08.652 cpu : usr=1.16%, sys=1.07%, ctx=533, majf=0, minf=1 00:09:08.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.652 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.652 00:09:08.652 Run status group 0 (all jobs): 00:09:08.652 READ: bw=81.3KiB/s (83.3kB/s), 81.3KiB/s-81.3KiB/s (83.3kB/s-83.3kB/s), io=84.0KiB 
(86.0kB), run=1033-1033msec 00:09:08.652 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:09:08.652 00:09:08.653 Disk stats (read/write): 00:09:08.653 nvme0n1: ios=67/512, merge=0/0, ticks=763/108, in_queue=871, util=93.49% 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.653 rmmod nvme_tcp 00:09:08.653 rmmod nvme_fabrics 00:09:08.653 rmmod nvme_keyring 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3266883 ']' 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3266883 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3266883 ']' 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3266883 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266883 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266883' 00:09:08.653 killing process with pid 3266883 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3266883 00:09:08.653 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3266883 00:09:08.914 11:08:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.914 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.825 11:08:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.825 00:09:10.825 real 0m18.809s 00:09:10.825 user 0m49.071s 00:09:10.825 sys 0m7.132s 00:09:10.825 11:08:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.825 11:08:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:10.825 ************************************ 00:09:10.825 END TEST nvmf_nmic 00:09:10.825 ************************************ 00:09:10.825 11:08:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:10.825 11:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.825 11:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.825 11:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.086 ************************************ 00:09:11.086 START TEST nvmf_fio_target 00:09:11.086 ************************************ 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:11.086 * Looking for test storage... 00:09:11.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.086 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:11.087 11:08:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.087 --rc genhtml_branch_coverage=1 00:09:11.087 --rc genhtml_function_coverage=1 00:09:11.087 --rc genhtml_legend=1 00:09:11.087 --rc geninfo_all_blocks=1 00:09:11.087 --rc geninfo_unexecuted_blocks=1 00:09:11.087 00:09:11.087 ' 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.087 --rc genhtml_branch_coverage=1 00:09:11.087 --rc genhtml_function_coverage=1 00:09:11.087 --rc genhtml_legend=1 00:09:11.087 --rc geninfo_all_blocks=1 00:09:11.087 --rc geninfo_unexecuted_blocks=1 00:09:11.087 00:09:11.087 ' 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.087 --rc genhtml_branch_coverage=1 00:09:11.087 --rc genhtml_function_coverage=1 00:09:11.087 --rc genhtml_legend=1 00:09:11.087 --rc geninfo_all_blocks=1 00:09:11.087 --rc geninfo_unexecuted_blocks=1 00:09:11.087 00:09:11.087 ' 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.087 --rc genhtml_branch_coverage=1 00:09:11.087 --rc genhtml_function_coverage=1 00:09:11.087 --rc genhtml_legend=1 00:09:11.087 --rc geninfo_all_blocks=1 00:09:11.087 --rc geninfo_unexecuted_blocks=1 00:09:11.087 00:09:11.087 ' 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.087 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.348 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.348 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.348 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.348 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.348 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.348 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:11.348 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.349 11:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:19.491 11:08:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:19.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:19.491 11:08:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:19.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:19.491 Found net devices under 0000:31:00.0: cvl_0_0 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.491 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:19.491 Found net devices under 0000:31:00.1: cvl_0_1 
00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:19.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:19.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:09:19.492 00:09:19.492 --- 10.0.0.2 ping statistics --- 00:09:19.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.492 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:09:19.492 00:09:19.492 --- 10.0.0.1 ping statistics --- 00:09:19.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.492 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3273460 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3273460 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3273460 ']' 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.492 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.752 [2024-12-06 11:08:25.675006] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:09:19.752 [2024-12-06 11:08:25.675057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.752 [2024-12-06 11:08:25.762930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.752 [2024-12-06 11:08:25.798626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.752 [2024-12-06 11:08:25.798658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.752 [2024-12-06 11:08:25.798666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.752 [2024-12-06 11:08:25.798673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.752 [2024-12-06 11:08:25.798679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:19.752 [2024-12-06 11:08:25.800432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.752 [2024-12-06 11:08:25.800546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.752 [2024-12-06 11:08:25.800699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.752 [2024-12-06 11:08:25.800701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.325 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.325 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:20.325 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:20.326 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:20.326 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.588 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.588 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.588 [2024-12-06 11:08:26.667920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.588 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.850 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:20.850 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.111 11:08:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:21.111 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.372 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:21.372 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.372 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:21.372 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:21.635 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.896 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:21.896 11:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.896 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:21.896 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.158 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:22.158 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:22.419 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.681 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:22.681 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.681 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:22.681 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.942 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.204 [2024-12-06 11:08:29.142573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.204 11:08:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:23.204 11:08:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:23.466 11:08:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:25.380 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:25.380 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:25.380 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.380 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:25.380 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:25.380 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:27.292 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:27.292 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:27.292 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.292 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:27.292 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.292 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:27.292 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:27.292 [global] 00:09:27.292 thread=1 00:09:27.292 invalidate=1 00:09:27.292 rw=write 00:09:27.292 time_based=1 00:09:27.292 runtime=1 00:09:27.292 ioengine=libaio 00:09:27.292 direct=1 00:09:27.292 bs=4096 00:09:27.292 iodepth=1 00:09:27.292 norandommap=0 00:09:27.292 numjobs=1 00:09:27.292 00:09:27.292 
verify_dump=1 00:09:27.292 verify_backlog=512 00:09:27.292 verify_state_save=0 00:09:27.292 do_verify=1 00:09:27.292 verify=crc32c-intel 00:09:27.292 [job0] 00:09:27.292 filename=/dev/nvme0n1 00:09:27.292 [job1] 00:09:27.292 filename=/dev/nvme0n2 00:09:27.292 [job2] 00:09:27.292 filename=/dev/nvme0n3 00:09:27.292 [job3] 00:09:27.292 filename=/dev/nvme0n4 00:09:27.292 Could not set queue depth (nvme0n1) 00:09:27.292 Could not set queue depth (nvme0n2) 00:09:27.292 Could not set queue depth (nvme0n3) 00:09:27.292 Could not set queue depth (nvme0n4) 00:09:27.553 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.553 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.553 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.553 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.553 fio-3.35 00:09:27.553 Starting 4 threads 00:09:28.950 00:09:28.950 job0: (groupid=0, jobs=1): err= 0: pid=3275361: Fri Dec 6 11:08:34 2024 00:09:28.950 read: IOPS=648, BW=2593KiB/s (2656kB/s)(2596KiB/1001msec) 00:09:28.950 slat (nsec): min=7018, max=56958, avg=23723.21, stdev=8581.57 00:09:28.950 clat (usec): min=302, max=952, avg=763.75, stdev=91.13 00:09:28.950 lat (usec): min=310, max=974, avg=787.47, stdev=93.38 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 515], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 685], 00:09:28.950 | 30.00th=[ 717], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 799], 00:09:28.950 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 889], 00:09:28.950 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:09:28.950 | 99.99th=[ 955] 00:09:28.950 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:28.950 slat (nsec): min=9472, max=65959, avg=29634.93, stdev=11364.65 
00:09:28.950 clat (usec): min=218, max=812, avg=436.96, stdev=88.20 00:09:28.950 lat (usec): min=230, max=847, avg=466.60, stdev=90.83 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 247], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 355], 00:09:28.950 | 30.00th=[ 375], 40.00th=[ 420], 50.00th=[ 445], 60.00th=[ 469], 00:09:28.950 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 570], 00:09:28.950 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 791], 99.95th=[ 816], 00:09:28.950 | 99.99th=[ 816] 00:09:28.950 bw ( KiB/s): min= 4087, max= 4087, per=41.07%, avg=4087.00, stdev= 0.00, samples=1 00:09:28.950 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:28.950 lat (usec) : 250=0.78%, 500=46.86%, 750=28.33%, 1000=24.03% 00:09:28.950 cpu : usr=2.10%, sys=5.00%, ctx=1676, majf=0, minf=1 00:09:28.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 issued rwts: total=649,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.950 job1: (groupid=0, jobs=1): err= 0: pid=3275373: Fri Dec 6 11:08:34 2024 00:09:28.950 read: IOPS=16, BW=67.8KiB/s (69.4kB/s)(68.0KiB/1003msec) 00:09:28.950 slat (nsec): min=25058, max=26122, avg=25657.00, stdev=234.64 00:09:28.950 clat (usec): min=1119, max=42995, avg=39520.04, stdev=9903.45 00:09:28.950 lat (usec): min=1144, max=43021, avg=39545.70, stdev=9903.61 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41681], 00:09:28.950 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:28.950 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:09:28.950 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:28.950 | 
99.99th=[43254] 00:09:28.950 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:28.950 slat (nsec): min=9407, max=67218, avg=29734.52, stdev=8880.74 00:09:28.950 clat (usec): min=243, max=892, avg=609.71, stdev=110.57 00:09:28.950 lat (usec): min=254, max=925, avg=639.44, stdev=113.78 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 363], 5.00th=[ 408], 10.00th=[ 469], 20.00th=[ 519], 00:09:28.950 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 635], 00:09:28.950 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 791], 00:09:28.950 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 898], 99.95th=[ 898], 00:09:28.950 | 99.99th=[ 898] 00:09:28.950 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:28.950 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:28.950 lat (usec) : 250=0.19%, 500=16.26%, 750=71.08%, 1000=9.26% 00:09:28.950 lat (msec) : 2=0.19%, 50=3.02% 00:09:28.950 cpu : usr=0.80%, sys=1.40%, ctx=529, majf=0, minf=1 00:09:28.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.950 job2: (groupid=0, jobs=1): err= 0: pid=3275383: Fri Dec 6 11:08:34 2024 00:09:28.950 read: IOPS=18, BW=73.9KiB/s (75.6kB/s)(76.0KiB/1029msec) 00:09:28.950 slat (nsec): min=9962, max=26101, avg=24983.58, stdev=3641.03 00:09:28.950 clat (usec): min=810, max=42992, avg=39950.82, stdev=9490.73 00:09:28.950 lat (usec): min=820, max=43018, avg=39975.81, stdev=9494.36 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 807], 5.00th=[ 807], 10.00th=[41157], 20.00th=[41681], 00:09:28.950 | 30.00th=[41681], 40.00th=[42206], 
50.00th=[42206], 60.00th=[42206], 00:09:28.950 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:09:28.950 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:28.950 | 99.99th=[43254] 00:09:28.950 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:28.950 slat (nsec): min=9803, max=51664, avg=30833.09, stdev=8529.21 00:09:28.950 clat (usec): min=150, max=950, avg=488.87, stdev=144.48 00:09:28.950 lat (usec): min=184, max=983, avg=519.70, stdev=147.48 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 255], 5.00th=[ 277], 10.00th=[ 314], 20.00th=[ 355], 00:09:28.950 | 30.00th=[ 383], 40.00th=[ 445], 50.00th=[ 482], 60.00th=[ 523], 00:09:28.950 | 70.00th=[ 553], 80.00th=[ 611], 90.00th=[ 693], 95.00th=[ 734], 00:09:28.950 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 947], 00:09:28.950 | 99.99th=[ 947] 00:09:28.950 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:28.950 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:28.950 lat (usec) : 250=0.75%, 500=51.60%, 750=40.11%, 1000=4.14% 00:09:28.950 lat (msec) : 50=3.39% 00:09:28.950 cpu : usr=0.88%, sys=1.36%, ctx=531, majf=0, minf=2 00:09:28.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.950 job3: (groupid=0, jobs=1): err= 0: pid=3275385: Fri Dec 6 11:08:34 2024 00:09:28.950 read: IOPS=62, BW=251KiB/s (258kB/s)(256KiB/1018msec) 00:09:28.950 slat (nsec): min=7712, max=46623, avg=26708.66, stdev=5548.70 00:09:28.950 clat (usec): min=812, max=43045, avg=11276.03, stdev=17906.96 00:09:28.950 lat (usec): min=822, 
max=43072, avg=11302.74, stdev=17907.18 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 816], 5.00th=[ 832], 10.00th=[ 898], 20.00th=[ 963], 00:09:28.950 | 30.00th=[ 996], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1106], 00:09:28.950 | 70.00th=[ 1156], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:28.950 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:28.950 | 99.99th=[43254] 00:09:28.950 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:09:28.950 slat (nsec): min=9720, max=67288, avg=31623.20, stdev=10317.45 00:09:28.950 clat (usec): min=154, max=881, avg=536.14, stdev=141.18 00:09:28.950 lat (usec): min=168, max=929, avg=567.76, stdev=144.32 00:09:28.950 clat percentiles (usec): 00:09:28.950 | 1.00th=[ 206], 5.00th=[ 302], 10.00th=[ 355], 20.00th=[ 412], 00:09:28.950 | 30.00th=[ 461], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 570], 00:09:28.950 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 725], 95.00th=[ 783], 00:09:28.950 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 881], 99.95th=[ 881], 00:09:28.950 | 99.99th=[ 881] 00:09:28.950 bw ( KiB/s): min= 4087, max= 4087, per=41.07%, avg=4087.00, stdev= 0.00, samples=1 00:09:28.950 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:28.950 lat (usec) : 250=1.74%, 500=34.20%, 750=46.70%, 1000=9.72% 00:09:28.950 lat (msec) : 2=4.86%, 50=2.78% 00:09:28.950 cpu : usr=0.79%, sys=1.87%, ctx=577, majf=0, minf=1 00:09:28.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.950 issued rwts: total=64,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.950 00:09:28.950 Run status group 0 (all jobs): 00:09:28.950 READ: bw=2912KiB/s (2981kB/s), 67.8KiB/s-2593KiB/s 
(69.4kB/s-2656kB/s), io=2996KiB (3068kB), run=1001-1029msec 00:09:28.950 WRITE: bw=9951KiB/s (10.2MB/s), 1990KiB/s-4092KiB/s (2038kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1029msec 00:09:28.950 00:09:28.950 Disk stats (read/write): 00:09:28.950 nvme0n1: ios=537/909, merge=0/0, ticks=1345/382, in_queue=1727, util=97.60% 00:09:28.950 nvme0n2: ios=51/512, merge=0/0, ticks=553/308, in_queue=861, util=88.57% 00:09:28.950 nvme0n3: ios=14/512, merge=0/0, ticks=549/230, in_queue=779, util=88.57% 00:09:28.950 nvme0n4: ios=81/512, merge=0/0, ticks=1434/264, in_queue=1698, util=98.09% 00:09:28.950 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:28.950 [global] 00:09:28.950 thread=1 00:09:28.950 invalidate=1 00:09:28.950 rw=randwrite 00:09:28.950 time_based=1 00:09:28.950 runtime=1 00:09:28.950 ioengine=libaio 00:09:28.950 direct=1 00:09:28.950 bs=4096 00:09:28.950 iodepth=1 00:09:28.950 norandommap=0 00:09:28.950 numjobs=1 00:09:28.950 00:09:28.950 verify_dump=1 00:09:28.950 verify_backlog=512 00:09:28.950 verify_state_save=0 00:09:28.950 do_verify=1 00:09:28.950 verify=crc32c-intel 00:09:28.950 [job0] 00:09:28.950 filename=/dev/nvme0n1 00:09:28.950 [job1] 00:09:28.950 filename=/dev/nvme0n2 00:09:28.950 [job2] 00:09:28.950 filename=/dev/nvme0n3 00:09:28.950 [job3] 00:09:28.950 filename=/dev/nvme0n4 00:09:28.950 Could not set queue depth (nvme0n1) 00:09:28.950 Could not set queue depth (nvme0n2) 00:09:28.951 Could not set queue depth (nvme0n3) 00:09:28.951 Could not set queue depth (nvme0n4) 00:09:29.211 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.211 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.211 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:29.211 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.211 fio-3.35 00:09:29.211 Starting 4 threads 00:09:30.613 00:09:30.613 job0: (groupid=0, jobs=1): err= 0: pid=3275836: Fri Dec 6 11:08:36 2024 00:09:30.613 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:30.613 slat (nsec): min=26512, max=58823, avg=27618.39, stdev=3490.86 00:09:30.613 clat (usec): min=654, max=1284, avg=1056.77, stdev=88.75 00:09:30.613 lat (usec): min=682, max=1311, avg=1084.38, stdev=88.27 00:09:30.613 clat percentiles (usec): 00:09:30.613 | 1.00th=[ 816], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 996], 00:09:30.613 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:09:30.613 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1172], 00:09:30.613 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1287], 99.95th=[ 1287], 00:09:30.613 | 99.99th=[ 1287] 00:09:30.613 write: IOPS=641, BW=2565KiB/s (2627kB/s)(2568KiB/1001msec); 0 zone resets 00:09:30.613 slat (nsec): min=8948, max=52187, avg=29179.02, stdev=9290.14 00:09:30.613 clat (usec): min=245, max=1006, avg=650.03, stdev=127.39 00:09:30.613 lat (usec): min=257, max=1039, avg=679.21, stdev=131.67 00:09:30.613 clat percentiles (usec): 00:09:30.613 | 1.00th=[ 334], 5.00th=[ 404], 10.00th=[ 486], 20.00th=[ 553], 00:09:30.613 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:09:30.613 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 848], 00:09:30.613 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1004], 00:09:30.613 | 99.99th=[ 1004] 00:09:30.613 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:09:30.613 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:30.613 lat (usec) : 250=0.09%, 500=6.50%, 750=37.95%, 1000=20.62% 00:09:30.613 lat (msec) : 2=34.84% 00:09:30.613 cpu : usr=1.80%, sys=5.10%, ctx=1154, 
majf=0, minf=1 00:09:30.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.613 issued rwts: total=512,642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.613 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.613 job1: (groupid=0, jobs=1): err= 0: pid=3275851: Fri Dec 6 11:08:36 2024 00:09:30.613 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:30.613 slat (nsec): min=7316, max=47497, avg=28080.77, stdev=3266.74 00:09:30.613 clat (usec): min=403, max=1469, avg=939.41, stdev=168.23 00:09:30.613 lat (usec): min=431, max=1497, avg=967.49, stdev=168.46 00:09:30.613 clat percentiles (usec): 00:09:30.613 | 1.00th=[ 510], 5.00th=[ 627], 10.00th=[ 685], 20.00th=[ 791], 00:09:30.613 | 30.00th=[ 857], 40.00th=[ 938], 50.00th=[ 979], 60.00th=[ 1012], 00:09:30.613 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:09:30.613 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1467], 99.95th=[ 1467], 00:09:30.613 | 99.99th=[ 1467] 00:09:30.613 write: IOPS=645, BW=2581KiB/s (2643kB/s)(2584KiB/1001msec); 0 zone resets 00:09:30.613 slat (nsec): min=9367, max=67550, avg=31324.97, stdev=9440.62 00:09:30.613 clat (usec): min=247, max=1283, avg=735.77, stdev=175.37 00:09:30.613 lat (usec): min=258, max=1317, avg=767.10, stdev=179.07 00:09:30.613 clat percentiles (usec): 00:09:30.613 | 1.00th=[ 281], 5.00th=[ 396], 10.00th=[ 469], 20.00th=[ 578], 00:09:30.613 | 30.00th=[ 668], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 816], 00:09:30.613 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 930], 95.00th=[ 955], 00:09:30.613 | 99.00th=[ 1004], 99.50th=[ 1057], 99.90th=[ 1287], 99.95th=[ 1287], 00:09:30.613 | 99.99th=[ 1287] 00:09:30.613 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:09:30.613 iops : min= 1024, 
max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:30.613 lat (usec) : 250=0.09%, 500=7.17%, 750=25.65%, 1000=46.55% 00:09:30.613 lat (msec) : 2=20.55% 00:09:30.613 cpu : usr=1.70%, sys=5.30%, ctx=1161, majf=0, minf=1 00:09:30.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.613 issued rwts: total=512,646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.613 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.613 job2: (groupid=0, jobs=1): err= 0: pid=3275870: Fri Dec 6 11:08:36 2024 00:09:30.613 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:30.613 slat (nsec): min=10300, max=45617, avg=27248.39, stdev=2271.12 00:09:30.613 clat (usec): min=690, max=1321, avg=975.66, stdev=102.87 00:09:30.613 lat (usec): min=717, max=1348, avg=1002.91, stdev=102.82 00:09:30.613 clat percentiles (usec): 00:09:30.613 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 898], 00:09:30.613 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1004], 00:09:30.613 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:09:30.613 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1319], 99.95th=[ 1319], 00:09:30.613 | 99.99th=[ 1319] 00:09:30.613 write: IOPS=693, BW=2773KiB/s (2840kB/s)(2776KiB/1001msec); 0 zone resets 00:09:30.613 slat (nsec): min=8971, max=68200, avg=30392.19, stdev=8789.49 00:09:30.613 clat (usec): min=321, max=963, avg=657.18, stdev=122.41 00:09:30.613 lat (usec): min=333, max=996, avg=687.58, stdev=125.52 00:09:30.613 clat percentiles (usec): 00:09:30.613 | 1.00th=[ 379], 5.00th=[ 437], 10.00th=[ 490], 20.00th=[ 562], 00:09:30.613 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 685], 00:09:30.613 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 857], 00:09:30.613 | 99.00th=[ 922], 
99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:09:30.613 | 99.99th=[ 963] 00:09:30.613 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:09:30.613 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:30.613 lat (usec) : 500=6.55%, 750=39.39%, 1000=36.32% 00:09:30.613 lat (msec) : 2=17.74% 00:09:30.613 cpu : usr=1.80%, sys=5.40%, ctx=1206, majf=0, minf=1 00:09:30.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.613 issued rwts: total=512,694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.613 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.613 job3: (groupid=0, jobs=1): err= 0: pid=3275877: Fri Dec 6 11:08:36 2024 00:09:30.613 read: IOPS=546, BW=2186KiB/s (2238kB/s)(2188KiB/1001msec) 00:09:30.613 slat (nsec): min=6687, max=46756, avg=27078.10, stdev=5026.56 00:09:30.613 clat (usec): min=461, max=1099, avg=791.07, stdev=126.83 00:09:30.613 lat (usec): min=488, max=1126, avg=818.15, stdev=127.26 00:09:30.613 clat percentiles (usec): 00:09:30.613 | 1.00th=[ 498], 5.00th=[ 578], 10.00th=[ 619], 20.00th=[ 676], 00:09:30.614 | 30.00th=[ 725], 40.00th=[ 758], 50.00th=[ 807], 60.00th=[ 840], 00:09:30.614 | 70.00th=[ 881], 80.00th=[ 914], 90.00th=[ 947], 95.00th=[ 971], 00:09:30.614 | 99.00th=[ 1037], 99.50th=[ 1074], 99.90th=[ 1106], 99.95th=[ 1106], 00:09:30.614 | 99.99th=[ 1106] 00:09:30.614 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:30.614 slat (nsec): min=8976, max=53104, avg=31983.04, stdev=7893.88 00:09:30.614 clat (usec): min=143, max=786, avg=495.70, stdev=128.11 00:09:30.614 lat (usec): min=175, max=820, avg=527.68, stdev=130.28 00:09:30.614 clat percentiles (usec): 00:09:30.614 | 1.00th=[ 196], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 383], 
00:09:30.614 | 30.00th=[ 420], 40.00th=[ 469], 50.00th=[ 510], 60.00th=[ 545], 00:09:30.614 | 70.00th=[ 570], 80.00th=[ 619], 90.00th=[ 660], 95.00th=[ 685], 00:09:30.614 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 766], 99.95th=[ 791], 00:09:30.614 | 99.99th=[ 791] 00:09:30.614 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:09:30.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:30.614 lat (usec) : 250=1.46%, 500=30.11%, 750=46.47%, 1000=21.01% 00:09:30.614 lat (msec) : 2=0.95% 00:09:30.614 cpu : usr=3.30%, sys=6.40%, ctx=1571, majf=0, minf=1 00:09:30.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.614 issued rwts: total=547,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.614 00:09:30.614 Run status group 0 (all jobs): 00:09:30.614 READ: bw=8324KiB/s (8523kB/s), 2046KiB/s-2186KiB/s (2095kB/s-2238kB/s), io=8332KiB (8532kB), run=1001-1001msec 00:09:30.614 WRITE: bw=11.7MiB/s (12.3MB/s), 2565KiB/s-4092KiB/s (2627kB/s-4190kB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:09:30.614 00:09:30.614 Disk stats (read/write): 00:09:30.614 nvme0n1: ios=495/512, merge=0/0, ticks=462/278, in_queue=740, util=87.07% 00:09:30.614 nvme0n2: ios=488/512, merge=0/0, ticks=1380/296, in_queue=1676, util=99.90% 00:09:30.614 nvme0n3: ios=473/512, merge=0/0, ticks=432/277, in_queue=709, util=88.38% 00:09:30.614 nvme0n4: ios=551/764, merge=0/0, ticks=428/278, in_queue=706, util=91.76% 00:09:30.614 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:30.614 [global] 00:09:30.614 thread=1 00:09:30.614 invalidate=1 
00:09:30.614 rw=write 00:09:30.614 time_based=1 00:09:30.614 runtime=1 00:09:30.614 ioengine=libaio 00:09:30.614 direct=1 00:09:30.614 bs=4096 00:09:30.614 iodepth=128 00:09:30.614 norandommap=0 00:09:30.614 numjobs=1 00:09:30.614 00:09:30.614 verify_dump=1 00:09:30.614 verify_backlog=512 00:09:30.614 verify_state_save=0 00:09:30.614 do_verify=1 00:09:30.614 verify=crc32c-intel 00:09:30.614 [job0] 00:09:30.614 filename=/dev/nvme0n1 00:09:30.614 [job1] 00:09:30.614 filename=/dev/nvme0n2 00:09:30.614 [job2] 00:09:30.614 filename=/dev/nvme0n3 00:09:30.614 [job3] 00:09:30.614 filename=/dev/nvme0n4 00:09:30.614 Could not set queue depth (nvme0n1) 00:09:30.614 Could not set queue depth (nvme0n2) 00:09:30.614 Could not set queue depth (nvme0n3) 00:09:30.614 Could not set queue depth (nvme0n4) 00:09:30.874 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.874 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.874 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.874 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.874 fio-3.35 00:09:30.874 Starting 4 threads 00:09:32.277 00:09:32.277 job0: (groupid=0, jobs=1): err= 0: pid=3276306: Fri Dec 6 11:08:38 2024 00:09:32.277 read: IOPS=6590, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1010msec) 00:09:32.277 slat (nsec): min=887, max=19377k, avg=72268.41, stdev=642043.72 00:09:32.277 clat (usec): min=2913, max=55026, avg=9255.19, stdev=5827.27 00:09:32.277 lat (usec): min=2922, max=55053, avg=9327.46, stdev=5891.63 00:09:32.277 clat percentiles (usec): 00:09:32.277 | 1.00th=[ 3818], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6390], 00:09:32.277 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7898], 00:09:32.277 | 70.00th=[ 8291], 80.00th=[ 9765], 90.00th=[17695], 
95.00th=[23462], 00:09:32.277 | 99.00th=[31065], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:32.277 | 99.99th=[54789] 00:09:32.277 write: IOPS=6707, BW=26.2MiB/s (27.5MB/s)(26.5MiB/1010msec); 0 zone resets 00:09:32.277 slat (nsec): min=1597, max=34037k, avg=66686.87, stdev=763920.41 00:09:32.277 clat (usec): min=795, max=73324, avg=9818.98, stdev=10281.89 00:09:32.277 lat (usec): min=804, max=73357, avg=9885.66, stdev=10351.03 00:09:32.277 clat percentiles (usec): 00:09:32.277 | 1.00th=[ 1991], 5.00th=[ 3687], 10.00th=[ 4359], 20.00th=[ 5735], 00:09:32.277 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6849], 00:09:32.277 | 70.00th=[ 7111], 80.00th=[ 8717], 90.00th=[20579], 95.00th=[36963], 00:09:32.277 | 99.00th=[54789], 99.50th=[55313], 99.90th=[67634], 99.95th=[67634], 00:09:32.277 | 99.99th=[72877] 00:09:32.277 bw ( KiB/s): min=24304, max=28944, per=28.01%, avg=26624.00, stdev=3280.98, samples=2 00:09:32.277 iops : min= 6076, max= 7236, avg=6656.00, stdev=820.24, samples=2 00:09:32.277 lat (usec) : 1000=0.02% 00:09:32.277 lat (msec) : 2=0.53%, 4=4.21%, 10=77.76%, 20=8.80%, 50=7.71% 00:09:32.277 lat (msec) : 100=0.97% 00:09:32.277 cpu : usr=5.55%, sys=7.23%, ctx=509, majf=0, minf=1 00:09:32.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:32.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.278 issued rwts: total=6656,6775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.278 job1: (groupid=0, jobs=1): err= 0: pid=3276324: Fri Dec 6 11:08:38 2024 00:09:32.278 read: IOPS=8769, BW=34.3MiB/s (35.9MB/s)(34.5MiB/1007msec) 00:09:32.278 slat (nsec): min=1014, max=6704.8k, avg=54448.00, stdev=403424.00 00:09:32.278 clat (usec): min=2280, max=15356, avg=7516.91, stdev=1827.03 00:09:32.278 lat (usec): min=2285, max=15385, 
avg=7571.35, stdev=1848.55 00:09:32.278 clat percentiles (usec): 00:09:32.278 | 1.00th=[ 3752], 5.00th=[ 5407], 10.00th=[ 5669], 20.00th=[ 6063], 00:09:32.278 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7439], 00:09:32.278 | 70.00th=[ 8356], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[11076], 00:09:32.278 | 99.00th=[12256], 99.50th=[12649], 99.90th=[14746], 99.95th=[14746], 00:09:32.278 | 99.99th=[15401] 00:09:32.278 write: IOPS=9151, BW=35.7MiB/s (37.5MB/s)(36.0MiB/1007msec); 0 zone resets 00:09:32.278 slat (nsec): min=1708, max=30157k, avg=50645.17, stdev=461771.15 00:09:32.278 clat (usec): min=1323, max=31480, avg=6263.96, stdev=1984.67 00:09:32.278 lat (usec): min=1333, max=31495, avg=6314.60, stdev=2019.13 00:09:32.278 clat percentiles (usec): 00:09:32.278 | 1.00th=[ 2212], 5.00th=[ 3851], 10.00th=[ 4146], 20.00th=[ 4883], 00:09:32.278 | 30.00th=[ 5538], 40.00th=[ 5997], 50.00th=[ 6325], 60.00th=[ 6521], 00:09:32.278 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 8586], 95.00th=[ 9503], 00:09:32.278 | 99.00th=[11863], 99.50th=[12649], 99.90th=[31589], 99.95th=[31589], 00:09:32.278 | 99.99th=[31589] 00:09:32.278 bw ( KiB/s): min=36856, max=36864, per=38.78%, avg=36860.00, stdev= 5.66, samples=2 00:09:32.278 iops : min= 9214, max= 9216, avg=9215.00, stdev= 1.41, samples=2 00:09:32.278 lat (msec) : 2=0.39%, 4=3.42%, 10=89.42%, 20=6.68%, 50=0.08% 00:09:32.278 cpu : usr=8.05%, sys=9.24%, ctx=605, majf=0, minf=1 00:09:32.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:32.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.278 issued rwts: total=8831,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.278 job2: (groupid=0, jobs=1): err= 0: pid=3276349: Fri Dec 6 11:08:38 2024 00:09:32.278 read: IOPS=3576, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1002msec) 00:09:32.278 slat (nsec): min=919, max=22270k, avg=119029.05, stdev=878411.46 00:09:32.278 clat (usec): min=3490, max=76652, avg=16042.52, stdev=10045.69 00:09:32.278 lat (usec): min=3525, max=76665, avg=16161.54, stdev=10129.64 00:09:32.278 clat percentiles (usec): 00:09:32.278 | 1.00th=[ 4424], 5.00th=[ 6456], 10.00th=[ 7701], 20.00th=[ 9110], 00:09:32.278 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13173], 60.00th=[16581], 00:09:32.278 | 70.00th=[19006], 80.00th=[19268], 90.00th=[22152], 95.00th=[31327], 00:09:32.278 | 99.00th=[71828], 99.50th=[72877], 99.90th=[72877], 99.95th=[77071], 00:09:32.278 | 99.99th=[77071] 00:09:32.278 write: IOPS=3904, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1002msec); 0 zone resets 00:09:32.278 slat (nsec): min=1639, max=21201k, avg=126301.76, stdev=947582.25 00:09:32.278 clat (usec): min=710, max=71725, avg=17804.84, stdev=13216.05 00:09:32.278 lat (usec): min=745, max=71732, avg=17931.15, stdev=13324.55 00:09:32.278 clat percentiles (usec): 00:09:32.278 | 1.00th=[ 2933], 5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 6259], 00:09:32.278 | 30.00th=[10683], 40.00th=[11207], 50.00th=[14353], 60.00th=[17695], 00:09:32.278 | 70.00th=[21627], 80.00th=[23462], 90.00th=[39584], 95.00th=[44827], 00:09:32.278 | 99.00th=[60556], 99.50th=[61080], 99.90th=[71828], 99.95th=[71828], 00:09:32.278 | 99.99th=[71828] 00:09:32.278 bw ( KiB/s): min=13896, max=16384, per=15.93%, avg=15140.00, stdev=1759.28, samples=2 00:09:32.278 iops : min= 3474, max= 4096, avg=3785.00, stdev=439.82, samples=2 00:09:32.278 lat (usec) : 750=0.01%, 1000=0.17% 00:09:32.278 lat (msec) : 2=0.09%, 4=1.56%, 10=23.75%, 20=49.53%, 50=21.85% 00:09:32.278 lat (msec) : 100=3.03% 00:09:32.278 cpu : usr=3.10%, sys=4.80%, ctx=331, majf=0, minf=1 00:09:32.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:32.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.278 issued rwts: total=3584,3912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.278 job3: (groupid=0, jobs=1): err= 0: pid=3276357: Fri Dec 6 11:08:38 2024 00:09:32.278 read: IOPS=3759, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1007msec) 00:09:32.278 slat (nsec): min=1072, max=17465k, avg=123007.82, stdev=887783.90 00:09:32.278 clat (usec): min=2823, max=50606, avg=15001.72, stdev=7150.04 00:09:32.278 lat (usec): min=3889, max=50615, avg=15124.73, stdev=7222.02 00:09:32.278 clat percentiles (usec): 00:09:32.278 | 1.00th=[ 4948], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[10945], 00:09:32.278 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13829], 60.00th=[14222], 00:09:32.278 | 70.00th=[14746], 80.00th=[17957], 90.00th=[24249], 95.00th=[30802], 00:09:32.278 | 99.00th=[44303], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:09:32.278 | 99.99th=[50594] 00:09:32.278 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:09:32.278 slat (nsec): min=1630, max=17847k, avg=119196.77, stdev=735862.88 00:09:32.278 clat (usec): min=506, max=50598, avg=17315.29, stdev=8474.06 00:09:32.278 lat (usec): min=571, max=51256, avg=17434.48, stdev=8530.57 00:09:32.278 clat percentiles (usec): 00:09:32.278 | 1.00th=[ 1037], 5.00th=[ 4817], 10.00th=[ 6718], 20.00th=[ 9896], 00:09:32.278 | 30.00th=[10945], 40.00th=[13304], 50.00th=[17695], 60.00th=[20841], 00:09:32.278 | 70.00th=[22938], 80.00th=[25297], 90.00th=[27132], 95.00th=[30802], 00:09:32.278 | 99.00th=[39584], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:09:32.278 | 99.99th=[50594] 00:09:32.278 bw ( KiB/s): min=14704, max=18064, per=17.24%, avg=16384.00, stdev=2375.88, samples=2 00:09:32.278 iops : min= 3676, max= 4516, avg=4096.00, stdev=593.97, samples=2 00:09:32.278 lat (usec) : 750=0.19%, 1000=0.22% 00:09:32.278 lat (msec) : 2=0.61%, 4=1.32%, 10=15.66%, 20=51.37%, 50=30.54% 00:09:32.278 
lat (msec) : 100=0.10% 00:09:32.278 cpu : usr=2.88%, sys=5.57%, ctx=340, majf=0, minf=2 00:09:32.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:32.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.278 issued rwts: total=3786,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.278 00:09:32.278 Run status group 0 (all jobs): 00:09:32.278 READ: bw=88.4MiB/s (92.7MB/s), 14.0MiB/s-34.3MiB/s (14.7MB/s-35.9MB/s), io=89.3MiB (93.6MB), run=1002-1010msec 00:09:32.278 WRITE: bw=92.8MiB/s (97.3MB/s), 15.2MiB/s-35.7MiB/s (16.0MB/s-37.5MB/s), io=93.7MiB (98.3MB), run=1002-1010msec 00:09:32.278 00:09:32.278 Disk stats (read/write): 00:09:32.278 nvme0n1: ios=5682/6111, merge=0/0, ticks=40178/43373, in_queue=83551, util=94.49% 00:09:32.278 nvme0n2: ios=7417/7680, merge=0/0, ticks=51477/43502, in_queue=94979, util=99.08% 00:09:32.278 nvme0n3: ios=2606/3071, merge=0/0, ticks=24283/30720, in_queue=55003, util=90.86% 00:09:32.278 nvme0n4: ios=2697/3072, merge=0/0, ticks=42730/59255, in_queue=101985, util=89.44% 00:09:32.278 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:32.278 [global] 00:09:32.278 thread=1 00:09:32.278 invalidate=1 00:09:32.278 rw=randwrite 00:09:32.278 time_based=1 00:09:32.278 runtime=1 00:09:32.278 ioengine=libaio 00:09:32.278 direct=1 00:09:32.278 bs=4096 00:09:32.279 iodepth=128 00:09:32.279 norandommap=0 00:09:32.279 numjobs=1 00:09:32.279 00:09:32.279 verify_dump=1 00:09:32.279 verify_backlog=512 00:09:32.279 verify_state_save=0 00:09:32.279 do_verify=1 00:09:32.279 verify=crc32c-intel 00:09:32.279 [job0] 00:09:32.279 filename=/dev/nvme0n1 00:09:32.279 [job1] 00:09:32.279 
filename=/dev/nvme0n2 00:09:32.279 [job2] 00:09:32.279 filename=/dev/nvme0n3 00:09:32.279 [job3] 00:09:32.279 filename=/dev/nvme0n4 00:09:32.279 Could not set queue depth (nvme0n1) 00:09:32.279 Could not set queue depth (nvme0n2) 00:09:32.279 Could not set queue depth (nvme0n3) 00:09:32.279 Could not set queue depth (nvme0n4) 00:09:32.539 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.539 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.539 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.539 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.539 fio-3.35 00:09:32.539 Starting 4 threads 00:09:33.942 00:09:33.943 job0: (groupid=0, jobs=1): err= 0: pid=3276809: Fri Dec 6 11:08:39 2024 00:09:33.943 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:09:33.943 slat (nsec): min=885, max=17109k, avg=86317.01, stdev=718949.47 00:09:33.943 clat (usec): min=1475, max=76952, avg=10989.50, stdev=8409.54 00:09:33.943 lat (usec): min=1482, max=76960, avg=11075.81, stdev=8496.60 00:09:33.943 clat percentiles (usec): 00:09:33.943 | 1.00th=[ 2008], 5.00th=[ 4817], 10.00th=[ 5604], 20.00th=[ 6456], 00:09:33.943 | 30.00th=[ 7177], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9503], 00:09:33.943 | 70.00th=[10421], 80.00th=[12125], 90.00th=[20317], 95.00th=[26870], 00:09:33.943 | 99.00th=[42730], 99.50th=[61604], 99.90th=[72877], 99.95th=[72877], 00:09:33.943 | 99.99th=[77071] 00:09:33.943 write: IOPS=5757, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1004msec); 0 zone resets 00:09:33.943 slat (nsec): min=1500, max=19286k, avg=78703.93, stdev=638402.08 00:09:33.943 clat (usec): min=890, max=83702, avg=11302.00, stdev=10808.78 00:09:33.943 lat (usec): min=897, max=83710, avg=11380.71, stdev=10880.55 00:09:33.943 
clat percentiles (usec): 00:09:33.943 | 1.00th=[ 1483], 5.00th=[ 2999], 10.00th=[ 3818], 20.00th=[ 5145], 00:09:33.943 | 30.00th=[ 5932], 40.00th=[ 6849], 50.00th=[ 7767], 60.00th=[ 8717], 00:09:33.943 | 70.00th=[11338], 80.00th=[13960], 90.00th=[21627], 95.00th=[35914], 00:09:33.943 | 99.00th=[62653], 99.50th=[66847], 99.90th=[72877], 99.95th=[72877], 00:09:33.943 | 99.99th=[83362] 00:09:33.943 bw ( KiB/s): min=17928, max=27432, per=29.45%, avg=22680.00, stdev=6720.34, samples=2 00:09:33.943 iops : min= 4482, max= 6858, avg=5670.00, stdev=1680.09, samples=2 00:09:33.943 lat (usec) : 1000=0.03% 00:09:33.943 lat (msec) : 2=1.81%, 4=6.09%, 10=55.90%, 20=24.31%, 50=10.71% 00:09:33.943 lat (msec) : 100=1.16% 00:09:33.943 cpu : usr=2.89%, sys=7.68%, ctx=476, majf=0, minf=1 00:09:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:33.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.943 issued rwts: total=5632,5781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.943 job1: (groupid=0, jobs=1): err= 0: pid=3276811: Fri Dec 6 11:08:39 2024 00:09:33.943 read: IOPS=4431, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1006msec) 00:09:33.943 slat (nsec): min=955, max=9484.3k, avg=108322.56, stdev=689864.50 00:09:33.943 clat (usec): min=1184, max=40070, avg=13451.15, stdev=4239.01 00:09:33.943 lat (usec): min=4473, max=40078, avg=13559.47, stdev=4293.22 00:09:33.943 clat percentiles (usec): 00:09:33.943 | 1.00th=[ 7046], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10421], 00:09:33.943 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12256], 60.00th=[13304], 00:09:33.943 | 70.00th=[14615], 80.00th=[16057], 90.00th=[20055], 95.00th=[21103], 00:09:33.943 | 99.00th=[28443], 99.50th=[32375], 99.90th=[40109], 99.95th=[40109], 00:09:33.943 | 99.99th=[40109] 00:09:33.943 write: 
IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:33.943 slat (nsec): min=1614, max=10925k, avg=102880.08, stdev=567326.64 00:09:33.943 clat (usec): min=1997, max=53555, avg=14586.94, stdev=8897.00 00:09:33.943 lat (usec): min=2004, max=53557, avg=14689.82, stdev=8942.14 00:09:33.943 clat percentiles (usec): 00:09:33.943 | 1.00th=[ 3261], 5.00th=[ 6259], 10.00th=[ 7373], 20.00th=[ 8586], 00:09:33.943 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11207], 60.00th=[12256], 00:09:33.943 | 70.00th=[13698], 80.00th=[18482], 90.00th=[30540], 95.00th=[34341], 00:09:33.943 | 99.00th=[46924], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:09:33.943 | 99.99th=[53740] 00:09:33.943 bw ( KiB/s): min=16384, max=20480, per=23.94%, avg=18432.00, stdev=2896.31, samples=2 00:09:33.943 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:33.943 lat (msec) : 2=0.04%, 4=1.00%, 10=19.07%, 20=65.12%, 50=14.60% 00:09:33.943 lat (msec) : 100=0.15% 00:09:33.943 cpu : usr=3.88%, sys=4.38%, ctx=370, majf=0, minf=1 00:09:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:33.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.943 issued rwts: total=4458,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.943 job2: (groupid=0, jobs=1): err= 0: pid=3276832: Fri Dec 6 11:08:39 2024 00:09:33.943 read: IOPS=4737, BW=18.5MiB/s (19.4MB/s)(19.3MiB/1045msec) 00:09:33.943 slat (nsec): min=933, max=18871k, avg=114466.06, stdev=817822.75 00:09:33.943 clat (usec): min=3574, max=71791, avg=15118.89, stdev=11717.48 00:09:33.943 lat (usec): min=4124, max=71796, avg=15233.36, stdev=11791.36 00:09:33.943 clat percentiles (usec): 00:09:33.943 | 1.00th=[ 5276], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:09:33.943 | 30.00th=[ 7898], 
40.00th=[ 8455], 50.00th=[ 9896], 60.00th=[11076], 00:09:33.943 | 70.00th=[17433], 80.00th=[21365], 90.00th=[31065], 95.00th=[41681], 00:09:33.943 | 99.00th=[61080], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:09:33.943 | 99.99th=[71828] 00:09:33.943 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:09:33.943 slat (nsec): min=1508, max=15783k, avg=79027.21, stdev=574229.43 00:09:33.943 clat (usec): min=2436, max=41697, avg=11229.07, stdev=6620.91 00:09:33.943 lat (usec): min=2450, max=41706, avg=11308.09, stdev=6666.71 00:09:33.943 clat percentiles (usec): 00:09:33.943 | 1.00th=[ 4178], 5.00th=[ 5276], 10.00th=[ 6063], 20.00th=[ 6521], 00:09:33.943 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 9110], 00:09:33.943 | 70.00th=[11600], 80.00th=[18220], 90.00th=[20317], 95.00th=[26870], 00:09:33.943 | 99.00th=[31589], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:09:33.943 | 99.99th=[41681] 00:09:33.943 bw ( KiB/s): min=14728, max=26232, per=26.60%, avg=20480.00, stdev=8134.56, samples=2 00:09:33.943 iops : min= 3682, max= 6558, avg=5120.00, stdev=2033.64, samples=2 00:09:33.943 lat (msec) : 4=0.35%, 10=55.89%, 20=25.35%, 50=17.48%, 100=0.93% 00:09:33.943 cpu : usr=3.35%, sys=5.36%, ctx=485, majf=0, minf=1 00:09:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:33.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.943 issued rwts: total=4951,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.943 job3: (groupid=0, jobs=1): err= 0: pid=3276839: Fri Dec 6 11:08:39 2024 00:09:33.943 read: IOPS=4494, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1004msec) 00:09:33.943 slat (nsec): min=926, max=44359k, avg=113643.63, stdev=985709.74 00:09:33.943 clat (usec): min=1265, max=64056, avg=15077.74, 
stdev=14421.21 00:09:33.943 lat (usec): min=2703, max=64062, avg=15191.39, stdev=14502.61 00:09:33.943 clat percentiles (usec): 00:09:33.943 | 1.00th=[ 3097], 5.00th=[ 5342], 10.00th=[ 6194], 20.00th=[ 7177], 00:09:33.943 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 9241], 60.00th=[10028], 00:09:33.943 | 70.00th=[11076], 80.00th=[19006], 90.00th=[37487], 95.00th=[54789], 00:09:33.943 | 99.00th=[62129], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:09:33.943 | 99.99th=[64226] 00:09:33.943 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:33.943 slat (nsec): min=1551, max=18955k, avg=97179.90, stdev=752702.23 00:09:33.943 clat (usec): min=1166, max=59641, avg=12663.11, stdev=10143.10 00:09:33.943 lat (usec): min=1174, max=62306, avg=12760.29, stdev=10202.96 00:09:33.943 clat percentiles (usec): 00:09:33.943 | 1.00th=[ 2212], 5.00th=[ 3916], 10.00th=[ 4752], 20.00th=[ 6194], 00:09:33.943 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 9634], 60.00th=[10290], 00:09:33.943 | 70.00th=[13829], 80.00th=[17695], 90.00th=[22938], 95.00th=[38011], 00:09:33.943 | 99.00th=[53216], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:09:33.943 | 99.99th=[59507] 00:09:33.943 bw ( KiB/s): min=12288, max=24576, per=23.94%, avg=18432.00, stdev=8688.93, samples=2 00:09:33.943 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:09:33.943 lat (msec) : 2=0.32%, 4=3.84%, 10=52.25%, 20=27.45%, 50=12.02% 00:09:33.943 lat (msec) : 100=4.13% 00:09:33.943 cpu : usr=2.79%, sys=5.78%, ctx=311, majf=0, minf=1 00:09:33.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:33.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.943 issued rwts: total=4512,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.943 00:09:33.943 Run 
status group 0 (all jobs): 00:09:33.943 READ: bw=73.1MiB/s (76.6MB/s), 17.3MiB/s-21.9MiB/s (18.2MB/s-23.0MB/s), io=76.4MiB (80.1MB), run=1004-1045msec 00:09:33.943 WRITE: bw=75.2MiB/s (78.8MB/s), 17.9MiB/s-22.5MiB/s (18.8MB/s-23.6MB/s), io=78.6MiB (82.4MB), run=1004-1045msec 00:09:33.943 00:09:33.943 Disk stats (read/write): 00:09:33.943 nvme0n1: ios=4146/4215, merge=0/0, ticks=35948/35027, in_queue=70975, util=86.57% 00:09:33.943 nvme0n2: ios=3624/4023, merge=0/0, ticks=30132/46242, in_queue=76374, util=99.39% 00:09:33.943 nvme0n3: ios=4647/4631, merge=0/0, ticks=28054/21034, in_queue=49088, util=94.82% 00:09:33.943 nvme0n4: ios=3072/3380, merge=0/0, ticks=22160/19285, in_queue=41445, util=88.77% 00:09:33.943 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:33.943 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3276978 00:09:33.943 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:33.943 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:33.943 [global] 00:09:33.943 thread=1 00:09:33.943 invalidate=1 00:09:33.943 rw=read 00:09:33.943 time_based=1 00:09:33.943 runtime=10 00:09:33.943 ioengine=libaio 00:09:33.943 direct=1 00:09:33.943 bs=4096 00:09:33.943 iodepth=1 00:09:33.943 norandommap=1 00:09:33.943 numjobs=1 00:09:33.943 00:09:33.943 [job0] 00:09:33.943 filename=/dev/nvme0n1 00:09:33.943 [job1] 00:09:33.943 filename=/dev/nvme0n2 00:09:33.943 [job2] 00:09:33.943 filename=/dev/nvme0n3 00:09:33.943 [job3] 00:09:33.943 filename=/dev/nvme0n4 00:09:33.944 Could not set queue depth (nvme0n1) 00:09:33.944 Could not set queue depth (nvme0n2) 00:09:33.944 Could not set queue depth (nvme0n3) 00:09:33.944 Could not set queue depth (nvme0n4) 00:09:34.206 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.206 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.206 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.206 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.206 fio-3.35 00:09:34.206 Starting 4 threads 00:09:36.808 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:36.809 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:36.809 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:09:36.809 fio: pid=3277340, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:37.068 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9641984, buflen=4096 00:09:37.068 fio: pid=3277339, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:37.068 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.068 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:37.328 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.328 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:37.328 fio: io_u error on file /dev/nvme0n1: Operation not supported: read 
offset=12152832, buflen=4096 00:09:37.328 fio: pid=3277307, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:37.328 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.328 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:37.589 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4980736, buflen=4096 00:09:37.589 fio: pid=3277324, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:37.589 00:09:37.589 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3277307: Fri Dec 6 11:08:43 2024 00:09:37.589 read: IOPS=996, BW=3984KiB/s (4080kB/s)(11.6MiB/2979msec) 00:09:37.589 slat (usec): min=6, max=27244, avg=42.44, stdev=543.42 00:09:37.589 clat (usec): min=309, max=1537, avg=947.51, stdev=121.75 00:09:37.589 lat (usec): min=336, max=28306, avg=989.96, stdev=557.06 00:09:37.589 clat percentiles (usec): 00:09:37.589 | 1.00th=[ 545], 5.00th=[ 701], 10.00th=[ 775], 20.00th=[ 881], 00:09:37.589 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:09:37.589 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1090], 00:09:37.589 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1467], 99.95th=[ 1516], 00:09:37.589 | 99.99th=[ 1532] 00:09:37.589 bw ( KiB/s): min= 3912, max= 4120, per=47.99%, avg=3990.40, stdev=80.48, samples=5 00:09:37.589 iops : min= 978, max= 1030, avg=997.60, stdev=20.12, samples=5 00:09:37.589 lat (usec) : 500=0.30%, 750=7.72%, 1000=57.68% 00:09:37.589 lat (msec) : 2=34.27% 00:09:37.589 cpu : usr=2.08%, sys=3.66%, ctx=2972, majf=0, minf=1 00:09:37.589 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.589 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.589 issued rwts: total=2968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.589 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.589 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3277324: Fri Dec 6 11:08:43 2024 00:09:37.589 read: IOPS=383, BW=1532KiB/s (1569kB/s)(4864KiB/3175msec) 00:09:37.589 slat (usec): min=14, max=4707, avg=33.66, stdev=168.11 00:09:37.589 clat (usec): min=623, max=43049, avg=2552.53, stdev=7788.42 00:09:37.589 lat (usec): min=668, max=46076, avg=2586.20, stdev=7824.30 00:09:37.589 clat percentiles (usec): 00:09:37.589 | 1.00th=[ 791], 5.00th=[ 881], 10.00th=[ 914], 20.00th=[ 955], 00:09:37.589 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:09:37.589 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1221], 00:09:37.589 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:09:37.589 | 99.99th=[43254] 00:09:37.589 bw ( KiB/s): min= 96, max= 3896, per=19.22%, avg=1598.33, stdev=1768.97, samples=6 00:09:37.589 iops : min= 24, max= 974, avg=399.50, stdev=442.29, samples=6 00:09:37.589 lat (usec) : 750=0.41%, 1000=41.41% 00:09:37.589 lat (msec) : 2=54.31%, 50=3.78% 00:09:37.589 cpu : usr=0.41%, sys=1.83%, ctx=1219, majf=0, minf=2 00:09:37.589 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.589 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.589 issued rwts: total=1217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.589 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.589 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3277339: Fri Dec 6 11:08:43 2024 00:09:37.589 read: 
IOPS=844, BW=3379KiB/s (3460kB/s)(9416KiB/2787msec) 00:09:37.589 slat (nsec): min=6843, max=63284, avg=25906.60, stdev=2637.93 00:09:37.589 clat (usec): min=513, max=42929, avg=1141.81, stdev=2390.01 00:09:37.589 lat (usec): min=538, max=42954, avg=1167.72, stdev=2389.99 00:09:37.589 clat percentiles (usec): 00:09:37.589 | 1.00th=[ 783], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 930], 00:09:37.589 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1029], 00:09:37.589 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:09:37.589 | 99.00th=[ 1221], 99.50th=[ 1270], 99.90th=[42206], 99.95th=[42206], 00:09:37.589 | 99.99th=[42730] 00:09:37.589 bw ( KiB/s): min= 1352, max= 3912, per=40.36%, avg=3356.80, stdev=1121.25, samples=5 00:09:37.589 iops : min= 338, max= 978, avg=839.20, stdev=280.31, samples=5 00:09:37.589 lat (usec) : 750=0.34%, 1000=46.07% 00:09:37.589 lat (msec) : 2=53.21%, 50=0.34% 00:09:37.590 cpu : usr=1.08%, sys=2.40%, ctx=2355, majf=0, minf=1 00:09:37.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.590 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.590 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.590 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3277340: Fri Dec 6 11:08:43 2024 00:09:37.590 read: IOPS=24, BW=95.3KiB/s (97.6kB/s)(252KiB/2644msec) 00:09:37.590 slat (nsec): min=25524, max=42444, avg=26134.09, stdev=2086.14 00:09:37.590 clat (usec): min=1203, max=43130, avg=41499.70, stdev=5178.06 00:09:37.590 lat (usec): min=1245, max=43156, avg=41525.84, stdev=5175.98 00:09:37.590 clat percentiles (usec): 00:09:37.590 | 1.00th=[ 1205], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:09:37.590 | 30.00th=[42206], 
40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:37.590 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:09:37.590 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:37.590 | 99.99th=[43254] 00:09:37.590 bw ( KiB/s): min= 96, max= 96, per=1.15%, avg=96.00, stdev= 0.00, samples=5 00:09:37.590 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:09:37.590 lat (msec) : 2=1.56%, 50=96.88% 00:09:37.590 cpu : usr=0.04%, sys=0.04%, ctx=65, majf=0, minf=2 00:09:37.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.590 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.590 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.590 00:09:37.590 Run status group 0 (all jobs): 00:09:37.590 READ: bw=8315KiB/s (8515kB/s), 95.3KiB/s-3984KiB/s (97.6kB/s-4080kB/s), io=25.8MiB (27.0MB), run=2644-3175msec 00:09:37.590 00:09:37.590 Disk stats (read/write): 00:09:37.590 nvme0n1: ios=2814/0, merge=0/0, ticks=2470/0, in_queue=2470, util=93.59% 00:09:37.590 nvme0n2: ios=1214/0, merge=0/0, ticks=2933/0, in_queue=2933, util=95.38% 00:09:37.590 nvme0n3: ios=2177/0, merge=0/0, ticks=2464/0, in_queue=2464, util=95.99% 00:09:37.590 nvme0n4: ios=62/0, merge=0/0, ticks=2574/0, in_queue=2574, util=96.42% 00:09:37.590 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.590 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:37.850 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:09:37.850 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:38.109 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.109 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:38.109 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.109 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3276978 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:38.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l 
-o NAME,SERIAL 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:38.368 nvmf hotplug test: fio failed as expected 00:09:38.368 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.629 rmmod 
nvme_tcp 00:09:38.629 rmmod nvme_fabrics 00:09:38.629 rmmod nvme_keyring 00:09:38.629 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3273460 ']' 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3273460 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3273460 ']' 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3273460 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3273460 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3273460' 00:09:38.888 killing process with pid 3273460 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3273460 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3273460 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.888 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.888 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.888 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.888 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.888 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.888 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:41.427 00:09:41.427 real 0m30.052s 00:09:41.427 user 2m35.489s 00:09:41.427 sys 0m10.209s 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.427 ************************************ 00:09:41.427 END TEST nvmf_fio_target 00:09:41.427 ************************************ 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.427 ************************************ 00:09:41.427 START TEST nvmf_bdevio 00:09:41.427 ************************************ 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:41.427 * Looking for test storage... 00:09:41.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.427 11:08:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.427 11:08:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:41.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.427 --rc genhtml_branch_coverage=1 00:09:41.427 --rc genhtml_function_coverage=1 00:09:41.427 --rc genhtml_legend=1 00:09:41.427 --rc geninfo_all_blocks=1 00:09:41.427 --rc geninfo_unexecuted_blocks=1 00:09:41.427 00:09:41.427 ' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:41.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.427 --rc genhtml_branch_coverage=1 00:09:41.427 --rc genhtml_function_coverage=1 00:09:41.427 --rc genhtml_legend=1 00:09:41.427 --rc geninfo_all_blocks=1 00:09:41.427 --rc geninfo_unexecuted_blocks=1 00:09:41.427 00:09:41.427 ' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:41.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.427 --rc genhtml_branch_coverage=1 00:09:41.427 --rc genhtml_function_coverage=1 00:09:41.427 --rc genhtml_legend=1 00:09:41.427 --rc geninfo_all_blocks=1 00:09:41.427 --rc geninfo_unexecuted_blocks=1 00:09:41.427 00:09:41.427 ' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:41.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.427 --rc genhtml_branch_coverage=1 00:09:41.427 --rc 
genhtml_function_coverage=1 00:09:41.427 --rc genhtml_legend=1 00:09:41.427 --rc geninfo_all_blocks=1 00:09:41.427 --rc geninfo_unexecuted_blocks=1 00:09:41.427 00:09:41.427 ' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.427 11:08:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.427 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:41.428 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:49.697 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.697 11:08:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:49.697 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:49.697 Found net devices under 0000:31:00.0: cvl_0_0 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:49.697 Found net devices under 0000:31:00.1: cvl_0_1 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.697 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:09:49.698 00:09:49.698 --- 10.0.0.2 ping statistics --- 00:09:49.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.698 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:09:49.698 00:09:49.698 --- 10.0.0.1 ping statistics --- 00:09:49.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.698 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.698 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3283065 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3283065 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3283065 ']' 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.976 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.976 [2024-12-06 11:08:55.948821] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:09:49.976 [2024-12-06 11:08:55.948895] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.976 [2024-12-06 11:08:56.059648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.976 [2024-12-06 11:08:56.110565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.976 [2024-12-06 11:08:56.110620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:49.976 [2024-12-06 11:08:56.110628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.976 [2024-12-06 11:08:56.110636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.976 [2024-12-06 11:08:56.110642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.976 [2024-12-06 11:08:56.112715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:49.976 [2024-12-06 11:08:56.112890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:49.976 [2024-12-06 11:08:56.113049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:49.976 [2024-12-06 11:08:56.113149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.921 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.921 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:50.921 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.922 [2024-12-06 11:08:56.830047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.922 Malloc0 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.922 [2024-12-06 
11:08:56.914532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.922 { 00:09:50.922 "params": { 00:09:50.922 "name": "Nvme$subsystem", 00:09:50.922 "trtype": "$TEST_TRANSPORT", 00:09:50.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.922 "adrfam": "ipv4", 00:09:50.922 "trsvcid": "$NVMF_PORT", 00:09:50.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.922 "hdgst": ${hdgst:-false}, 00:09:50.922 "ddgst": ${ddgst:-false} 00:09:50.922 }, 00:09:50.922 "method": "bdev_nvme_attach_controller" 00:09:50.922 } 00:09:50.922 EOF 00:09:50.922 )") 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:50.922 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.922 "params": { 00:09:50.922 "name": "Nvme1", 00:09:50.922 "trtype": "tcp", 00:09:50.922 "traddr": "10.0.0.2", 00:09:50.922 "adrfam": "ipv4", 00:09:50.922 "trsvcid": "4420", 00:09:50.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.922 "hdgst": false, 00:09:50.922 "ddgst": false 00:09:50.922 }, 00:09:50.922 "method": "bdev_nvme_attach_controller" 00:09:50.922 }' 00:09:50.922 [2024-12-06 11:08:56.981626] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:09:50.922 [2024-12-06 11:08:56.981723] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283247 ] 00:09:50.922 [2024-12-06 11:08:57.067749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.184 [2024-12-06 11:08:57.111886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.184 [2024-12-06 11:08:57.111966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.184 [2024-12-06 11:08:57.111969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.184 I/O targets: 00:09:51.184 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:51.184 00:09:51.184 00:09:51.184 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.184 http://cunit.sourceforge.net/ 00:09:51.184 00:09:51.184 00:09:51.184 Suite: bdevio tests on: Nvme1n1 00:09:51.184 Test: blockdev write read block ...passed 00:09:51.184 Test: blockdev write zeroes read block ...passed 00:09:51.445 Test: blockdev write zeroes read no split ...passed 00:09:51.445 Test: blockdev write zeroes read split 
...passed 00:09:51.445 Test: blockdev write zeroes read split partial ...passed 00:09:51.445 Test: blockdev reset ...[2024-12-06 11:08:57.414178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:51.445 [2024-12-06 11:08:57.414241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147d0e0 (9): Bad file descriptor 00:09:51.445 [2024-12-06 11:08:57.444242] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:51.445 passed 00:09:51.445 Test: blockdev write read 8 blocks ...passed 00:09:51.445 Test: blockdev write read size > 128k ...passed 00:09:51.445 Test: blockdev write read invalid size ...passed 00:09:51.445 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.445 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.445 Test: blockdev write read max offset ...passed 00:09:51.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.705 Test: blockdev writev readv 8 blocks ...passed 00:09:51.705 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.705 Test: blockdev writev readv block ...passed 00:09:51.705 Test: blockdev writev readv size > 128k ...passed 00:09:51.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.705 Test: blockdev comparev and writev ...[2024-12-06 11:08:57.752471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 11:08:57.752500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.752512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 
11:08:57.752518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.753032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 11:08:57.753042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.753052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 11:08:57.753058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.753544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 11:08:57.753552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.753562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 11:08:57.753568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.754099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 11:08:57.754109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.754118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:51.705 [2024-12-06 11:08:57.754124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:51.705 passed 00:09:51.705 Test: blockdev nvme passthru rw ...passed 00:09:51.705 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:08:57.838778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:51.705 [2024-12-06 11:08:57.838790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.839165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:51.705 [2024-12-06 11:08:57.839175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.839505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:51.705 [2024-12-06 11:08:57.839514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:51.705 [2024-12-06 11:08:57.839824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:51.705 [2024-12-06 11:08:57.839832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:51.705 passed 00:09:51.705 Test: blockdev nvme admin passthru ...passed 00:09:51.966 Test: blockdev copy ...passed 00:09:51.966 00:09:51.966 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.966 suites 1 1 n/a 0 0 00:09:51.966 tests 23 23 23 0 0 00:09:51.966 asserts 152 152 152 0 n/a 00:09:51.966 00:09:51.966 Elapsed time = 1.291 seconds 
00:09:51.966 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.966 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.966 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.966 rmmod nvme_tcp 00:09:51.966 rmmod nvme_fabrics 00:09:51.966 rmmod nvme_keyring 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3283065 ']' 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3283065 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3283065 ']' 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3283065 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.966 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283065 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3283065' 00:09:52.226 killing process with pid 3283065 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3283065 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3283065 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.226 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.769 11:09:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.769 00:09:54.769 real 0m13.211s 00:09:54.769 user 0m13.242s 00:09:54.769 sys 0m6.976s 00:09:54.769 11:09:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.769 11:09:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.769 ************************************ 00:09:54.769 END TEST nvmf_bdevio 00:09:54.769 ************************************ 00:09:54.769 11:09:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:54.769 00:09:54.769 real 5m13.508s 00:09:54.769 user 11m43.717s 00:09:54.769 sys 1m56.709s 00:09:54.769 11:09:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.769 11:09:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.769 ************************************ 00:09:54.769 END TEST nvmf_target_core 00:09:54.769 ************************************ 00:09:54.769 11:09:00 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:54.769 11:09:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.769 11:09:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.769 11:09:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:54.769 ************************************ 00:09:54.769 START TEST nvmf_target_extra 00:09:54.769 ************************************ 00:09:54.769 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:54.769 * Looking for test storage... 00:09:54.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.770 --rc genhtml_branch_coverage=1 00:09:54.770 --rc genhtml_function_coverage=1 00:09:54.770 --rc genhtml_legend=1 00:09:54.770 --rc geninfo_all_blocks=1 
00:09:54.770 --rc geninfo_unexecuted_blocks=1 00:09:54.770 00:09:54.770 ' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.770 --rc genhtml_branch_coverage=1 00:09:54.770 --rc genhtml_function_coverage=1 00:09:54.770 --rc genhtml_legend=1 00:09:54.770 --rc geninfo_all_blocks=1 00:09:54.770 --rc geninfo_unexecuted_blocks=1 00:09:54.770 00:09:54.770 ' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.770 --rc genhtml_branch_coverage=1 00:09:54.770 --rc genhtml_function_coverage=1 00:09:54.770 --rc genhtml_legend=1 00:09:54.770 --rc geninfo_all_blocks=1 00:09:54.770 --rc geninfo_unexecuted_blocks=1 00:09:54.770 00:09:54.770 ' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.770 --rc genhtml_branch_coverage=1 00:09:54.770 --rc genhtml_function_coverage=1 00:09:54.770 --rc genhtml_legend=1 00:09:54.770 --rc geninfo_all_blocks=1 00:09:54.770 --rc geninfo_unexecuted_blocks=1 00:09:54.770 00:09:54.770 ' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:54.770 ************************************ 00:09:54.770 START TEST nvmf_example 00:09:54.770 ************************************ 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:54.770 * Looking for test storage... 00:09:54.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.770 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.032 
11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.032 --rc genhtml_branch_coverage=1 00:09:55.032 --rc genhtml_function_coverage=1 00:09:55.032 --rc genhtml_legend=1 00:09:55.032 --rc geninfo_all_blocks=1 00:09:55.032 --rc geninfo_unexecuted_blocks=1 00:09:55.032 00:09:55.032 ' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.032 --rc genhtml_branch_coverage=1 00:09:55.032 --rc genhtml_function_coverage=1 00:09:55.032 --rc genhtml_legend=1 00:09:55.032 --rc geninfo_all_blocks=1 00:09:55.032 --rc geninfo_unexecuted_blocks=1 00:09:55.032 00:09:55.032 ' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.032 --rc genhtml_branch_coverage=1 00:09:55.032 --rc genhtml_function_coverage=1 00:09:55.032 --rc genhtml_legend=1 00:09:55.032 --rc geninfo_all_blocks=1 00:09:55.032 --rc geninfo_unexecuted_blocks=1 00:09:55.032 00:09:55.032 ' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.032 --rc 
genhtml_branch_coverage=1 00:09:55.032 --rc genhtml_function_coverage=1 00:09:55.032 --rc genhtml_legend=1 00:09:55.032 --rc geninfo_all_blocks=1 00:09:55.032 --rc geninfo_unexecuted_blocks=1 00:09:55.032 00:09:55.032 ' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.032 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.033 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.033 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.033 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.033 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.033 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.033 11:09:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:55.033 11:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.033 
11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.033 11:09:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.166 11:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:03.166 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:03.166 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:03.166 Found net devices under 0000:31:00.0: cvl_0_0 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.166 11:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:03.166 Found net devices under 0000:31:00.1: cvl_0_1 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.166 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.167 
11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.167 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:10:03.428 00:10:03.428 --- 10.0.0.2 ping statistics --- 00:10:03.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.428 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:10:03.428 00:10:03.428 --- 10.0.0.1 ping statistics --- 00:10:03.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.428 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.428 11:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3288756 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3288756 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3288756 ']' 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:03.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.428 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:04.369 
11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:04.369 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:16.593 Initializing NVMe Controllers 00:10:16.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:16.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:16.593 Initialization complete. Launching workers. 00:10:16.593 ======================================================== 00:10:16.593 Latency(us) 00:10:16.593 Device Information : IOPS MiB/s Average min max 00:10:16.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18512.45 72.31 3456.64 687.27 17116.95 00:10:16.593 ======================================================== 00:10:16.593 Total : 18512.45 72.31 3456.64 687.27 17116.95 00:10:16.593 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.593 rmmod nvme_tcp 00:10:16.593 rmmod nvme_fabrics 00:10:16.593 rmmod nvme_keyring 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3288756 ']' 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3288756 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3288756 ']' 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3288756 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3288756 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3288756' 00:10:16.593 killing process with pid 3288756 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3288756 00:10:16.593 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3288756 00:10:16.593 nvmf threads initialize successfully 00:10:16.593 bdev subsystem init successfully 00:10:16.593 created a nvmf target service 00:10:16.593 create targets's poll groups done 00:10:16.594 all subsystems of target started 00:10:16.594 nvmf target is running 00:10:16.594 all subsystems of target stopped 00:10:16.594 destroy targets's poll groups done 00:10:16.594 destroyed the nvmf target service 00:10:16.594 bdev subsystem 
finish successfully 00:10:16.594 nvmf threads destroy successfully 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.594 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 00:10:17.164 real 0m22.311s 00:10:17.164 user 0m46.647s 00:10:17.164 sys 0m7.682s 00:10:17.164 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 ************************************ 00:10:17.164 END TEST nvmf_example 00:10:17.164 ************************************ 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 ************************************ 00:10:17.164 START TEST nvmf_filesystem 00:10:17.164 ************************************ 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:17.164 * Looking for test storage... 
00:10:17.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.164 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:17.429 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.429 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:17.429 --rc genhtml_branch_coverage=1 00:10:17.429 --rc genhtml_function_coverage=1 00:10:17.429 --rc genhtml_legend=1 00:10:17.429 --rc geninfo_all_blocks=1 00:10:17.429 --rc geninfo_unexecuted_blocks=1 00:10:17.429 00:10:17.429 ' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.429 --rc genhtml_branch_coverage=1 00:10:17.429 --rc genhtml_function_coverage=1 00:10:17.429 --rc genhtml_legend=1 00:10:17.429 --rc geninfo_all_blocks=1 00:10:17.429 --rc geninfo_unexecuted_blocks=1 00:10:17.429 00:10:17.429 ' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.429 --rc genhtml_branch_coverage=1 00:10:17.429 --rc genhtml_function_coverage=1 00:10:17.429 --rc genhtml_legend=1 00:10:17.429 --rc geninfo_all_blocks=1 00:10:17.429 --rc geninfo_unexecuted_blocks=1 00:10:17.429 00:10:17.429 ' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.429 --rc genhtml_branch_coverage=1 00:10:17.429 --rc genhtml_function_coverage=1 00:10:17.429 --rc genhtml_legend=1 00:10:17.429 --rc geninfo_all_blocks=1 00:10:17.429 --rc geninfo_unexecuted_blocks=1 00:10:17.429 00:10:17.429 ' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:17.429 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:17.429 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:17.429 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:17.429 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:17.430 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:17.430 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:17.430 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:17.430 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:17.430 #define SPDK_CONFIG_H 00:10:17.430 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:17.430 #define SPDK_CONFIG_APPS 1 00:10:17.430 #define SPDK_CONFIG_ARCH native 00:10:17.430 #undef SPDK_CONFIG_ASAN 00:10:17.430 #undef SPDK_CONFIG_AVAHI 00:10:17.430 #undef SPDK_CONFIG_CET 00:10:17.430 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:17.430 #define SPDK_CONFIG_COVERAGE 1 00:10:17.430 #define SPDK_CONFIG_CROSS_PREFIX 00:10:17.430 #undef SPDK_CONFIG_CRYPTO 00:10:17.430 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:17.430 #undef SPDK_CONFIG_CUSTOMOCF 00:10:17.430 #undef SPDK_CONFIG_DAOS 00:10:17.430 #define SPDK_CONFIG_DAOS_DIR 00:10:17.430 #define SPDK_CONFIG_DEBUG 1 00:10:17.430 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:17.430 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:17.430 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:17.430 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:17.430 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:17.430 #undef SPDK_CONFIG_DPDK_UADK 00:10:17.430 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:17.430 #define SPDK_CONFIG_EXAMPLES 1 00:10:17.430 #undef SPDK_CONFIG_FC 00:10:17.430 #define SPDK_CONFIG_FC_PATH 00:10:17.430 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:17.430 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:17.430 #define SPDK_CONFIG_FSDEV 1 00:10:17.430 #undef SPDK_CONFIG_FUSE 00:10:17.430 #undef SPDK_CONFIG_FUZZER 00:10:17.430 #define SPDK_CONFIG_FUZZER_LIB 00:10:17.430 #undef SPDK_CONFIG_GOLANG 00:10:17.430 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:17.430 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:17.430 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:17.430 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:17.430 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:17.430 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:17.430 #undef SPDK_CONFIG_HAVE_LZ4 00:10:17.430 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:17.430 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:17.430 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:17.430 #define SPDK_CONFIG_IDXD 1 00:10:17.430 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:17.430 #undef SPDK_CONFIG_IPSEC_MB 00:10:17.430 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:17.430 #define SPDK_CONFIG_ISAL 1 00:10:17.430 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:17.430 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:17.430 #define SPDK_CONFIG_LIBDIR 00:10:17.430 #undef SPDK_CONFIG_LTO 00:10:17.430 #define SPDK_CONFIG_MAX_LCORES 128 00:10:17.430 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:17.430 #define SPDK_CONFIG_NVME_CUSE 1 00:10:17.430 #undef SPDK_CONFIG_OCF 00:10:17.430 #define SPDK_CONFIG_OCF_PATH 00:10:17.430 #define SPDK_CONFIG_OPENSSL_PATH 00:10:17.430 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:17.430 #define SPDK_CONFIG_PGO_DIR 00:10:17.430 #undef SPDK_CONFIG_PGO_USE 00:10:17.430 #define SPDK_CONFIG_PREFIX /usr/local 00:10:17.430 #undef SPDK_CONFIG_RAID5F 00:10:17.430 #undef SPDK_CONFIG_RBD 00:10:17.430 #define SPDK_CONFIG_RDMA 1 00:10:17.431 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:17.431 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:17.431 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:17.431 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:17.431 #define SPDK_CONFIG_SHARED 1 00:10:17.431 #undef SPDK_CONFIG_SMA 00:10:17.431 #define SPDK_CONFIG_TESTS 1 00:10:17.431 #undef SPDK_CONFIG_TSAN 00:10:17.431 #define SPDK_CONFIG_UBLK 1 00:10:17.431 #define SPDK_CONFIG_UBSAN 1 00:10:17.431 #undef SPDK_CONFIG_UNIT_TESTS 00:10:17.431 #undef SPDK_CONFIG_URING 00:10:17.431 #define SPDK_CONFIG_URING_PATH 00:10:17.431 #undef SPDK_CONFIG_URING_ZNS 00:10:17.431 #undef SPDK_CONFIG_USDT 00:10:17.431 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:17.431 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:17.431 #define SPDK_CONFIG_VFIO_USER 1 00:10:17.431 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:17.431 #define SPDK_CONFIG_VHOST 1 00:10:17.431 #define SPDK_CONFIG_VIRTIO 1 00:10:17.431 #undef SPDK_CONFIG_VTUNE 00:10:17.431 #define SPDK_CONFIG_VTUNE_DIR 00:10:17.431 #define SPDK_CONFIG_WERROR 1 00:10:17.431 #define SPDK_CONFIG_WPDK_DIR 00:10:17.431 #undef SPDK_CONFIG_XNVME 00:10:17.431 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:17.431 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:17.431 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:17.431 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:17.432 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:17.432 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:17.432 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:17.432 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3291691 ]] 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3291691 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.h6zTCZ 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.h6zTCZ/tests/target /tmp/spdk.h6zTCZ 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:17.433 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122174480384 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7182069760 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666906624 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847689216 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23621632 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:17.434 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677556224 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=720896 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:17.434 * Looking for test storage... 
00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122174480384 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9396662272 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.434 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:17.434 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.434 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.695 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.695 --rc genhtml_branch_coverage=1 00:10:17.695 --rc genhtml_function_coverage=1 00:10:17.695 --rc genhtml_legend=1 00:10:17.696 --rc geninfo_all_blocks=1 00:10:17.696 --rc geninfo_unexecuted_blocks=1 00:10:17.696 00:10:17.696 ' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.696 --rc genhtml_branch_coverage=1 00:10:17.696 --rc genhtml_function_coverage=1 00:10:17.696 --rc genhtml_legend=1 00:10:17.696 --rc geninfo_all_blocks=1 00:10:17.696 --rc geninfo_unexecuted_blocks=1 00:10:17.696 00:10:17.696 ' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.696 --rc genhtml_branch_coverage=1 00:10:17.696 --rc genhtml_function_coverage=1 00:10:17.696 --rc genhtml_legend=1 00:10:17.696 --rc geninfo_all_blocks=1 00:10:17.696 --rc geninfo_unexecuted_blocks=1 00:10:17.696 00:10:17.696 ' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.696 --rc genhtml_branch_coverage=1 00:10:17.696 --rc genhtml_function_coverage=1 00:10:17.696 --rc genhtml_legend=1 00:10:17.696 --rc geninfo_all_blocks=1 00:10:17.696 --rc geninfo_unexecuted_blocks=1 00:10:17.696 00:10:17.696 ' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.696 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.696 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.835 11:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.835 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:25.835 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:25.836 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.836 11:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:25.836 Found net devices under 0000:31:00.0: cvl_0_0 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:25.836 Found net devices under 0000:31:00.1: cvl_0_1 00:10:25.836 11:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:25.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:25.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:10:25.836 00:10:25.836 --- 10.0.0.2 ping statistics --- 00:10:25.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.836 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:10:25.836 11:09:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:10:26.097 00:10:26.097 --- 10.0.0.1 ping statistics --- 00:10:26.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.097 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:26.097 11:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.097 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:26.097 ************************************ 00:10:26.097 START TEST nvmf_filesystem_no_in_capsule 00:10:26.097 ************************************ 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3296027 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3296027 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3296027 ']' 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.098 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.098 [2024-12-06 11:09:32.162611] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:10:26.098 [2024-12-06 11:09:32.162677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.098 [2024-12-06 11:09:32.254827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.359 [2024-12-06 11:09:32.297301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.359 [2024-12-06 11:09:32.297338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:26.359 [2024-12-06 11:09:32.297346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.359 [2024-12-06 11:09:32.297353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.359 [2024-12-06 11:09:32.297359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.359 [2024-12-06 11:09:32.298909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.359 [2024-12-06 11:09:32.299127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.359 [2024-12-06 11:09:32.299128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.359 [2024-12-06 11:09:32.298981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.929 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.929 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:26.929 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.929 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.929 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.929 [2024-12-06 11:09:33.019049] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.929 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 Malloc1 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 [2024-12-06 11:09:33.162829] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:27.190 11:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:27.190 { 00:10:27.190 "name": "Malloc1", 00:10:27.190 "aliases": [ 00:10:27.190 "97ec7929-1e7f-43c8-9114-c6d2bfe92f01" 00:10:27.190 ], 00:10:27.190 "product_name": "Malloc disk", 00:10:27.190 "block_size": 512, 00:10:27.190 "num_blocks": 1048576, 00:10:27.190 "uuid": "97ec7929-1e7f-43c8-9114-c6d2bfe92f01", 00:10:27.190 "assigned_rate_limits": { 00:10:27.190 "rw_ios_per_sec": 0, 00:10:27.190 "rw_mbytes_per_sec": 0, 00:10:27.190 "r_mbytes_per_sec": 0, 00:10:27.190 "w_mbytes_per_sec": 0 00:10:27.190 }, 00:10:27.190 "claimed": true, 00:10:27.190 "claim_type": "exclusive_write", 00:10:27.190 "zoned": false, 00:10:27.190 "supported_io_types": { 00:10:27.190 "read": true, 00:10:27.190 "write": true, 00:10:27.190 "unmap": true, 00:10:27.190 "flush": true, 00:10:27.190 "reset": true, 00:10:27.190 "nvme_admin": false, 00:10:27.190 "nvme_io": false, 00:10:27.190 "nvme_io_md": false, 00:10:27.190 "write_zeroes": true, 00:10:27.190 "zcopy": true, 00:10:27.190 "get_zone_info": false, 00:10:27.190 "zone_management": false, 00:10:27.190 "zone_append": false, 00:10:27.190 "compare": false, 00:10:27.190 "compare_and_write": 
false, 00:10:27.190 "abort": true, 00:10:27.190 "seek_hole": false, 00:10:27.190 "seek_data": false, 00:10:27.190 "copy": true, 00:10:27.190 "nvme_iov_md": false 00:10:27.190 }, 00:10:27.190 "memory_domains": [ 00:10:27.190 { 00:10:27.190 "dma_device_id": "system", 00:10:27.190 "dma_device_type": 1 00:10:27.190 }, 00:10:27.190 { 00:10:27.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.190 "dma_device_type": 2 00:10:27.190 } 00:10:27.190 ], 00:10:27.190 "driver_specific": {} 00:10:27.190 } 00:10:27.190 ]' 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:27.190 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:29.101 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:29.101 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:29.101 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.101 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:29.101 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:31.015 11:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:31.015 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:31.276 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:31.537 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:32.480 11:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.480 ************************************ 00:10:32.480 START TEST filesystem_ext4 00:10:32.480 ************************************ 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:32.480 11:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:32.480 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:32.480 mke2fs 1.47.0 (5-Feb-2023) 00:10:32.480 Discarding device blocks: 0/522240 done 00:10:32.740 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:32.740 Filesystem UUID: daeb7a00-1608-47f5-8d8e-27386d893d6d 00:10:32.740 Superblock backups stored on blocks: 00:10:32.740 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:32.740 00:10:32.740 Allocating group tables: 0/64 done 00:10:32.740 Writing inode tables: 0/64 done 00:10:35.283 Creating journal (8192 blocks): done 00:10:35.283 Writing superblocks and filesystem accounting information: 0/64 done 00:10:35.283 00:10:35.545 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:35.545 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:40.834 11:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3296027 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:40.834 00:10:40.834 real 0m8.381s 00:10:40.834 user 0m0.029s 00:10:40.834 sys 0m0.081s 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:40.834 ************************************ 00:10:40.834 END TEST filesystem_ext4 00:10:40.834 ************************************ 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:40.834 
11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.834 11:09:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.095 ************************************ 00:10:41.095 START TEST filesystem_btrfs 00:10:41.095 ************************************ 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:41.095 11:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:41.095 btrfs-progs v6.8.1 00:10:41.095 See https://btrfs.readthedocs.io for more information. 00:10:41.095 00:10:41.095 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:41.095 NOTE: several default settings have changed in version 5.15, please make sure 00:10:41.095 this does not affect your deployments: 00:10:41.095 - DUP for metadata (-m dup) 00:10:41.095 - enabled no-holes (-O no-holes) 00:10:41.095 - enabled free-space-tree (-R free-space-tree) 00:10:41.095 00:10:41.095 Label: (null) 00:10:41.095 UUID: a3f014ce-0286-4c4c-a62c-f2048ce328ba 00:10:41.095 Node size: 16384 00:10:41.095 Sector size: 4096 (CPU page size: 4096) 00:10:41.095 Filesystem size: 510.00MiB 00:10:41.095 Block group profiles: 00:10:41.095 Data: single 8.00MiB 00:10:41.095 Metadata: DUP 32.00MiB 00:10:41.095 System: DUP 8.00MiB 00:10:41.095 SSD detected: yes 00:10:41.095 Zoned device: no 00:10:41.095 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:41.095 Checksum: crc32c 00:10:41.095 Number of devices: 1 00:10:41.095 Devices: 00:10:41.095 ID SIZE PATH 00:10:41.095 1 510.00MiB /dev/nvme0n1p1 00:10:41.095 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:41.095 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.039 11:09:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3296027 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.039 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.299 00:10:42.299 real 0m1.199s 00:10:42.299 user 0m0.020s 00:10:42.299 sys 0m0.124s 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.299 
11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:42.299 ************************************ 00:10:42.299 END TEST filesystem_btrfs 00:10:42.299 ************************************ 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.299 ************************************ 00:10:42.299 START TEST filesystem_xfs 00:10:42.299 ************************************ 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:42.299 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:43.246 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:43.246 = sectsz=512 attr=2, projid32bit=1 00:10:43.246 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:43.246 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:43.246 data = bsize=4096 blocks=130560, imaxpct=25 00:10:43.246 = sunit=0 swidth=0 blks 00:10:43.246 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:43.246 log =internal log bsize=4096 blocks=16384, version=2 00:10:43.246 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:43.246 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:44.187 Discarding blocks...Done. 
00:10:44.187 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:44.187 11:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3296027 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.731 11:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.731 00:10:46.731 real 0m4.494s 00:10:46.731 user 0m0.029s 00:10:46.731 sys 0m0.081s 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:46.731 ************************************ 00:10:46.731 END TEST filesystem_xfs 00:10:46.731 ************************************ 00:10:46.731 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:46.992 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:46.992 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3296027 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3296027 ']' 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3296027 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3296027 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3296027' 00:10:47.253 killing process with pid 3296027 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3296027 00:10:47.253 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3296027 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:47.514 00:10:47.514 real 0m21.464s 00:10:47.514 user 1m24.859s 00:10:47.514 sys 0m1.514s 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.514 ************************************ 00:10:47.514 END TEST nvmf_filesystem_no_in_capsule 00:10:47.514 ************************************ 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.514 11:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.514 ************************************ 00:10:47.514 START TEST nvmf_filesystem_in_capsule 00:10:47.514 ************************************ 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3300573 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3300573 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3300573 ']' 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.514 11:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.514 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.775 [2024-12-06 11:09:53.717330] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:10:47.775 [2024-12-06 11:09:53.717379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.775 [2024-12-06 11:09:53.803152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.775 [2024-12-06 11:09:53.837886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.775 [2024-12-06 11:09:53.837923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.775 [2024-12-06 11:09:53.837931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.775 [2024-12-06 11:09:53.837938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.775 [2024-12-06 11:09:53.837944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:47.775 [2024-12-06 11:09:53.839522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.775 [2024-12-06 11:09:53.839636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.775 [2024-12-06 11:09:53.839793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.775 [2024-12-06 11:09:53.839793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.345 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.345 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:48.345 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.345 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.345 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.605 [2024-12-06 11:09:54.554669] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.605 Malloc1 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.605 11:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.605 [2024-12-06 11:09:54.687931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.605 11:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:48.605 { 00:10:48.605 "name": "Malloc1", 00:10:48.605 "aliases": [ 00:10:48.605 "6f64ae5f-af44-4613-99ad-4d9b3b02d3e3" 00:10:48.605 ], 00:10:48.605 "product_name": "Malloc disk", 00:10:48.605 "block_size": 512, 00:10:48.605 "num_blocks": 1048576, 00:10:48.605 "uuid": "6f64ae5f-af44-4613-99ad-4d9b3b02d3e3", 00:10:48.605 "assigned_rate_limits": { 00:10:48.605 "rw_ios_per_sec": 0, 00:10:48.605 "rw_mbytes_per_sec": 0, 00:10:48.605 "r_mbytes_per_sec": 0, 00:10:48.605 "w_mbytes_per_sec": 0 00:10:48.605 }, 00:10:48.605 "claimed": true, 00:10:48.605 "claim_type": "exclusive_write", 00:10:48.605 "zoned": false, 00:10:48.605 "supported_io_types": { 00:10:48.605 "read": true, 00:10:48.605 "write": true, 00:10:48.605 "unmap": true, 00:10:48.605 "flush": true, 00:10:48.605 "reset": true, 00:10:48.605 "nvme_admin": false, 00:10:48.605 "nvme_io": false, 00:10:48.605 "nvme_io_md": false, 00:10:48.605 "write_zeroes": true, 00:10:48.605 "zcopy": true, 00:10:48.605 "get_zone_info": false, 00:10:48.605 "zone_management": false, 00:10:48.605 "zone_append": false, 00:10:48.605 "compare": false, 00:10:48.605 "compare_and_write": false, 00:10:48.605 "abort": true, 00:10:48.605 "seek_hole": false, 00:10:48.605 "seek_data": false, 00:10:48.605 "copy": true, 00:10:48.605 "nvme_iov_md": false 00:10:48.605 }, 00:10:48.605 "memory_domains": [ 00:10:48.605 { 00:10:48.605 "dma_device_id": "system", 00:10:48.605 "dma_device_type": 1 00:10:48.605 }, 00:10:48.605 { 00:10:48.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.605 "dma_device_type": 2 00:10:48.605 } 00:10:48.605 ], 00:10:48.605 
"driver_specific": {} 00:10:48.605 } 00:10:48.605 ]' 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:48.605 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:48.606 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:48.866 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:48.866 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:48.866 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:48.866 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:48.866 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.271 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.271 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:50.271 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.271 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:50.271 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.178 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.178 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:52.179 11:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:52.179 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:52.438 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:52.697 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.648 ************************************ 00:10:53.648 START TEST filesystem_in_capsule_ext4 00:10:53.648 ************************************ 00:10:53.648 11:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:53.648 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:53.648 mke2fs 1.47.0 (5-Feb-2023) 00:10:53.950 Discarding device blocks: 
0/522240 done 00:10:53.950 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:53.950 Filesystem UUID: 5f58cf13-8472-4fbe-a9d8-87d509e78296 00:10:53.950 Superblock backups stored on blocks: 00:10:53.950 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:53.950 00:10:53.950 Allocating group tables: 0/64 done 00:10:53.950 Writing inode tables: 0/64 done 00:10:53.950 Creating journal (8192 blocks): done 00:10:56.197 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:10:56.197 00:10:56.197 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:56.197 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3300573 00:11:02.783 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:02.784 00:11:02.784 real 0m8.595s 00:11:02.784 user 0m0.031s 00:11:02.784 sys 0m0.077s 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:02.784 ************************************ 00:11:02.784 END TEST filesystem_in_capsule_ext4 00:11:02.784 ************************************ 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.784 ************************************ 00:11:02.784 START 
TEST filesystem_in_capsule_btrfs 00:11:02.784 ************************************ 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:02.784 btrfs-progs v6.8.1 00:11:02.784 See https://btrfs.readthedocs.io for more information. 00:11:02.784 00:11:02.784 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:02.784 NOTE: several default settings have changed in version 5.15, please make sure 00:11:02.784 this does not affect your deployments: 00:11:02.784 - DUP for metadata (-m dup) 00:11:02.784 - enabled no-holes (-O no-holes) 00:11:02.784 - enabled free-space-tree (-R free-space-tree) 00:11:02.784 00:11:02.784 Label: (null) 00:11:02.784 UUID: 96839a70-4949-4046-9872-bc83070c6bdf 00:11:02.784 Node size: 16384 00:11:02.784 Sector size: 4096 (CPU page size: 4096) 00:11:02.784 Filesystem size: 510.00MiB 00:11:02.784 Block group profiles: 00:11:02.784 Data: single 8.00MiB 00:11:02.784 Metadata: DUP 32.00MiB 00:11:02.784 System: DUP 8.00MiB 00:11:02.784 SSD detected: yes 00:11:02.784 Zoned device: no 00:11:02.784 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:02.784 Checksum: crc32c 00:11:02.784 Number of devices: 1 00:11:02.784 Devices: 00:11:02.784 ID SIZE PATH 00:11:02.784 1 510.00MiB /dev/nvme0n1p1 00:11:02.784 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:02.784 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3300573 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.045 00:11:03.045 real 0m0.608s 00:11:03.045 user 0m0.027s 00:11:03.045 sys 0m0.124s 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:03.045 ************************************ 00:11:03.045 END TEST filesystem_in_capsule_btrfs 00:11:03.045 ************************************ 00:11:03.045 11:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.045 ************************************ 00:11:03.045 START TEST filesystem_in_capsule_xfs 00:11:03.045 ************************************ 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:03.045 
11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:03.045 11:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:03.306 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:03.306 = sectsz=512 attr=2, projid32bit=1 00:11:03.306 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:03.306 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:03.306 data = bsize=4096 blocks=130560, imaxpct=25 00:11:03.306 = sunit=0 swidth=0 blks 00:11:03.306 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:03.306 log =internal log bsize=4096 blocks=16384, version=2 00:11:03.306 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:03.306 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:04.247 Discarding blocks...Done. 
00:11:04.247 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:04.248 11:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.163 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3300573 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.425 00:11:06.425 real 0m3.296s 00:11:06.425 user 0m0.030s 00:11:06.425 sys 0m0.076s 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.425 ************************************ 00:11:06.425 END TEST filesystem_in_capsule_xfs 00:11:06.425 ************************************ 00:11:06.425 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:06.686 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:06.686 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.947 11:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3300573 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3300573 ']' 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3300573 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:06.947 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.947 11:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300573 00:11:06.947 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.947 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.947 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3300573' 00:11:06.947 killing process with pid 3300573 00:11:06.947 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3300573 00:11:06.947 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3300573 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:07.209 00:11:07.209 real 0m19.628s 00:11:07.209 user 1m17.580s 00:11:07.209 sys 0m1.418s 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.209 ************************************ 00:11:07.209 END TEST nvmf_filesystem_in_capsule 00:11:07.209 ************************************ 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.209 rmmod nvme_tcp 00:11:07.209 rmmod nvme_fabrics 00:11:07.209 rmmod nvme_keyring 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.209 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.470 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.470 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.470 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.470 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.470 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.387 00:11:09.387 real 0m52.301s 00:11:09.387 user 2m45.059s 00:11:09.387 sys 0m9.486s 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:09.387 ************************************ 00:11:09.387 END TEST nvmf_filesystem 00:11:09.387 ************************************ 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.387 ************************************ 00:11:09.387 START TEST nvmf_target_discovery 00:11:09.387 ************************************ 00:11:09.387 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:09.648 * Looking for test storage... 
00:11:09.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:09.648 
11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.648 --rc genhtml_branch_coverage=1 00:11:09.648 --rc genhtml_function_coverage=1 00:11:09.648 --rc genhtml_legend=1 00:11:09.648 --rc geninfo_all_blocks=1 00:11:09.648 --rc geninfo_unexecuted_blocks=1 00:11:09.648 00:11:09.648 ' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.648 --rc genhtml_branch_coverage=1 00:11:09.648 --rc genhtml_function_coverage=1 00:11:09.648 --rc genhtml_legend=1 00:11:09.648 --rc geninfo_all_blocks=1 00:11:09.648 --rc geninfo_unexecuted_blocks=1 00:11:09.648 00:11:09.648 ' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.648 --rc genhtml_branch_coverage=1 00:11:09.648 --rc genhtml_function_coverage=1 00:11:09.648 --rc genhtml_legend=1 00:11:09.648 --rc geninfo_all_blocks=1 00:11:09.648 --rc geninfo_unexecuted_blocks=1 00:11:09.648 00:11:09.648 ' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.648 --rc genhtml_branch_coverage=1 00:11:09.648 --rc genhtml_function_coverage=1 00:11:09.648 --rc genhtml_legend=1 00:11:09.648 --rc geninfo_all_blocks=1 00:11:09.648 --rc geninfo_unexecuted_blocks=1 00:11:09.648 00:11:09.648 ' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.648 11:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:09.648 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.649 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:17.785 11:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.785 11:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:17.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:17.785 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.785 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.786 11:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:17.786 Found net devices under 0000:31:00.0: cvl_0_0 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.786 11:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:17.786 Found net devices under 0000:31:00.1: cvl_0_1 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.786 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.046 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:18.046 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.046 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:11:18.046 00:11:18.046 --- 10.0.0.2 ping statistics --- 00:11:18.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.046 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:11:18.046 00:11:18.046 --- 10.0.0.1 ping statistics --- 00:11:18.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.046 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.046 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3309191 00:11:18.047 11:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3309191 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3309191 ']' 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.047 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.047 [2024-12-06 11:10:24.107547] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:11:18.047 [2024-12-06 11:10:24.107598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.047 [2024-12-06 11:10:24.190301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.307 [2024-12-06 11:10:24.226213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:18.307 [2024-12-06 11:10:24.226241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.308 [2024-12-06 11:10:24.226249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.308 [2024-12-06 11:10:24.226255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.308 [2024-12-06 11:10:24.226261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.308 [2024-12-06 11:10:24.227928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.308 [2024-12-06 11:10:24.228052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.308 [2024-12-06 11:10:24.228207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.308 [2024-12-06 11:10:24.228208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.881 [2024-12-06 11:10:24.971069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.881 Null1 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.881 11:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.881 
11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.881 [2024-12-06 11:10:25.030034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.881 Null2 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.881 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 
11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 Null3 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 Null4 00:11:19.141 
11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.142 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:19.402 00:11:19.402 Discovery Log Number of Records 5, Generation counter 6 00:11:19.402 =====Discovery Log Entry 0====== 00:11:19.402 trtype: tcp 00:11:19.402 adrfam: ipv4 00:11:19.402 subtype: current discovery subsystem 00:11:19.402 treq: not required 00:11:19.402 portid: 0 00:11:19.402 trsvcid: 4420 00:11:19.402 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.402 traddr: 10.0.0.2 00:11:19.402 eflags: explicit discovery connections, duplicate discovery information 00:11:19.402 sectype: none 00:11:19.402 =====Discovery Log Entry 1====== 00:11:19.402 trtype: tcp 00:11:19.402 adrfam: ipv4 00:11:19.402 subtype: nvme subsystem 00:11:19.402 treq: not required 00:11:19.402 portid: 0 00:11:19.402 trsvcid: 4420 00:11:19.402 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:19.402 traddr: 10.0.0.2 00:11:19.402 eflags: none 00:11:19.402 sectype: none 00:11:19.402 =====Discovery Log Entry 2====== 00:11:19.402 
trtype: tcp 00:11:19.402 adrfam: ipv4 00:11:19.402 subtype: nvme subsystem 00:11:19.402 treq: not required 00:11:19.402 portid: 0 00:11:19.402 trsvcid: 4420 00:11:19.402 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:19.402 traddr: 10.0.0.2 00:11:19.402 eflags: none 00:11:19.402 sectype: none 00:11:19.402 =====Discovery Log Entry 3====== 00:11:19.402 trtype: tcp 00:11:19.402 adrfam: ipv4 00:11:19.402 subtype: nvme subsystem 00:11:19.402 treq: not required 00:11:19.402 portid: 0 00:11:19.402 trsvcid: 4420 00:11:19.402 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:19.402 traddr: 10.0.0.2 00:11:19.402 eflags: none 00:11:19.402 sectype: none 00:11:19.402 =====Discovery Log Entry 4====== 00:11:19.402 trtype: tcp 00:11:19.402 adrfam: ipv4 00:11:19.402 subtype: nvme subsystem 00:11:19.402 treq: not required 00:11:19.402 portid: 0 00:11:19.402 trsvcid: 4420 00:11:19.402 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:19.402 traddr: 10.0.0.2 00:11:19.402 eflags: none 00:11:19.402 sectype: none 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:19.402 Perform nvmf subsystem discovery via RPC 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.402 [ 00:11:19.402 { 00:11:19.402 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:19.402 "subtype": "Discovery", 00:11:19.402 "listen_addresses": [ 00:11:19.402 { 00:11:19.402 "trtype": "TCP", 00:11:19.402 "adrfam": "IPv4", 00:11:19.402 "traddr": "10.0.0.2", 00:11:19.402 "trsvcid": "4420" 00:11:19.402 } 00:11:19.402 ], 00:11:19.402 "allow_any_host": true, 00:11:19.402 "hosts": [] 00:11:19.402 }, 00:11:19.402 { 00:11:19.402 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.402 "subtype": "NVMe", 00:11:19.402 "listen_addresses": [ 00:11:19.402 { 00:11:19.402 "trtype": "TCP", 00:11:19.402 "adrfam": "IPv4", 00:11:19.402 "traddr": "10.0.0.2", 00:11:19.402 "trsvcid": "4420" 00:11:19.402 } 00:11:19.402 ], 00:11:19.402 "allow_any_host": true, 00:11:19.402 "hosts": [], 00:11:19.402 "serial_number": "SPDK00000000000001", 00:11:19.402 "model_number": "SPDK bdev Controller", 00:11:19.402 "max_namespaces": 32, 00:11:19.402 "min_cntlid": 1, 00:11:19.402 "max_cntlid": 65519, 00:11:19.402 "namespaces": [ 00:11:19.402 { 00:11:19.402 "nsid": 1, 00:11:19.402 "bdev_name": "Null1", 00:11:19.402 "name": "Null1", 00:11:19.402 "nguid": "905197897E4446FDA6F55347C41F9A67", 00:11:19.402 "uuid": "90519789-7e44-46fd-a6f5-5347c41f9a67" 00:11:19.402 } 00:11:19.402 ] 00:11:19.402 }, 00:11:19.402 { 00:11:19.402 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:19.402 "subtype": "NVMe", 00:11:19.402 "listen_addresses": [ 00:11:19.402 { 00:11:19.402 "trtype": "TCP", 00:11:19.402 "adrfam": "IPv4", 00:11:19.402 "traddr": "10.0.0.2", 00:11:19.402 "trsvcid": "4420" 00:11:19.402 } 00:11:19.402 ], 00:11:19.402 "allow_any_host": true, 00:11:19.402 "hosts": [], 00:11:19.402 "serial_number": "SPDK00000000000002", 00:11:19.402 "model_number": "SPDK bdev Controller", 00:11:19.402 "max_namespaces": 32, 00:11:19.402 "min_cntlid": 1, 00:11:19.402 "max_cntlid": 65519, 00:11:19.402 "namespaces": [ 00:11:19.402 { 00:11:19.402 "nsid": 1, 00:11:19.402 "bdev_name": "Null2", 00:11:19.402 "name": "Null2", 00:11:19.402 "nguid": "3A805CE2CC6146799F22B85062050108", 00:11:19.402 "uuid": "3a805ce2-cc61-4679-9f22-b85062050108" 00:11:19.402 } 00:11:19.402 ] 00:11:19.402 }, 00:11:19.402 { 00:11:19.402 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:19.402 "subtype": "NVMe", 00:11:19.402 "listen_addresses": [ 00:11:19.402 { 00:11:19.402 "trtype": "TCP", 00:11:19.402 "adrfam": "IPv4", 00:11:19.402 "traddr": "10.0.0.2", 00:11:19.402 "trsvcid": "4420" 00:11:19.402 } 
00:11:19.402 ], 00:11:19.402 "allow_any_host": true, 00:11:19.402 "hosts": [], 00:11:19.402 "serial_number": "SPDK00000000000003", 00:11:19.402 "model_number": "SPDK bdev Controller", 00:11:19.402 "max_namespaces": 32, 00:11:19.402 "min_cntlid": 1, 00:11:19.402 "max_cntlid": 65519, 00:11:19.402 "namespaces": [ 00:11:19.402 { 00:11:19.402 "nsid": 1, 00:11:19.402 "bdev_name": "Null3", 00:11:19.402 "name": "Null3", 00:11:19.402 "nguid": "F2D5C28432584EFFBA8F4DD3EB31A91D", 00:11:19.402 "uuid": "f2d5c284-3258-4eff-ba8f-4dd3eb31a91d" 00:11:19.402 } 00:11:19.402 ] 00:11:19.402 }, 00:11:19.402 { 00:11:19.402 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:19.402 "subtype": "NVMe", 00:11:19.402 "listen_addresses": [ 00:11:19.402 { 00:11:19.402 "trtype": "TCP", 00:11:19.402 "adrfam": "IPv4", 00:11:19.402 "traddr": "10.0.0.2", 00:11:19.402 "trsvcid": "4420" 00:11:19.402 } 00:11:19.402 ], 00:11:19.402 "allow_any_host": true, 00:11:19.402 "hosts": [], 00:11:19.402 "serial_number": "SPDK00000000000004", 00:11:19.402 "model_number": "SPDK bdev Controller", 00:11:19.402 "max_namespaces": 32, 00:11:19.402 "min_cntlid": 1, 00:11:19.402 "max_cntlid": 65519, 00:11:19.402 "namespaces": [ 00:11:19.402 { 00:11:19.402 "nsid": 1, 00:11:19.402 "bdev_name": "Null4", 00:11:19.402 "name": "Null4", 00:11:19.402 "nguid": "997764F2802249BAB9A3E2A640F29BE6", 00:11:19.402 "uuid": "997764f2-8022-49ba-b9a3-e2a640f29be6" 00:11:19.402 } 00:11:19.402 ] 00:11:19.402 } 00:11:19.402 ] 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.402 11:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.402 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.403 11:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.403 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.663 11:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.663 rmmod nvme_tcp 00:11:19.663 rmmod nvme_fabrics 00:11:19.663 rmmod nvme_keyring 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3309191 ']' 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3309191 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3309191 ']' 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3309191 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3309191 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.663 11:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3309191' 00:11:19.663 killing process with pid 3309191 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3309191 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3309191 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.663 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.924 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.924 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.924 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.839 00:11:21.839 real 0m12.365s 00:11:21.839 user 0m8.895s 00:11:21.839 sys 0m6.648s 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.839 ************************************ 00:11:21.839 END TEST nvmf_target_discovery 00:11:21.839 ************************************ 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.839 ************************************ 00:11:21.839 START TEST nvmf_referrals 00:11:21.839 ************************************ 00:11:21.839 11:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:22.102 * Looking for test storage... 
00:11:22.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:22.102 11:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:22.102 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.103 
--rc genhtml_branch_coverage=1 00:11:22.103 --rc genhtml_function_coverage=1 00:11:22.103 --rc genhtml_legend=1 00:11:22.103 --rc geninfo_all_blocks=1 00:11:22.103 --rc geninfo_unexecuted_blocks=1 00:11:22.103 00:11:22.103 ' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.103 --rc genhtml_branch_coverage=1 00:11:22.103 --rc genhtml_function_coverage=1 00:11:22.103 --rc genhtml_legend=1 00:11:22.103 --rc geninfo_all_blocks=1 00:11:22.103 --rc geninfo_unexecuted_blocks=1 00:11:22.103 00:11:22.103 ' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.103 --rc genhtml_branch_coverage=1 00:11:22.103 --rc genhtml_function_coverage=1 00:11:22.103 --rc genhtml_legend=1 00:11:22.103 --rc geninfo_all_blocks=1 00:11:22.103 --rc geninfo_unexecuted_blocks=1 00:11:22.103 00:11:22.103 ' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.103 --rc genhtml_branch_coverage=1 00:11:22.103 --rc genhtml_function_coverage=1 00:11:22.103 --rc genhtml_legend=1 00:11:22.103 --rc geninfo_all_blocks=1 00:11:22.103 --rc geninfo_unexecuted_blocks=1 00:11:22.103 00:11:22.103 ' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.103 
11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.103 11:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.103 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.104 11:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.104 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:30.243 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:30.244 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:30.244 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:30.244 Found net devices under 0000:31:00.0: cvl_0_0 00:11:30.244 11:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:30.244 Found net devices under 0000:31:00.1: cvl_0_1 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:11:30.244 00:11:30.244 --- 10.0.0.2 ping statistics --- 00:11:30.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.244 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:11:30.244 00:11:30.244 --- 10.0.0.1 ping statistics --- 00:11:30.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.244 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:30.244 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3314249 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3314249 00:11:30.245 
11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3314249 ']' 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.245 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.505 [2024-12-06 11:10:36.425312] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:11:30.505 [2024-12-06 11:10:36.425386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.505 [2024-12-06 11:10:36.519626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.505 [2024-12-06 11:10:36.561544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.505 [2024-12-06 11:10:36.561582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:30.505 [2024-12-06 11:10:36.561590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.505 [2024-12-06 11:10:36.561597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.505 [2024-12-06 11:10:36.561603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.505 [2024-12-06 11:10:36.563489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.505 [2024-12-06 11:10:36.563608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.505 [2024-12-06 11:10:36.563763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.505 [2024-12-06 11:10:36.563764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.076 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.076 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:31.076 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.076 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:31.076 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.336 [2024-12-06 11:10:37.287836] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.336 [2024-12-06 11:10:37.314027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -ah 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 -ah 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 -ah 00:11:31.336 
11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.336 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.337 11:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.337 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.599 11:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.599 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.860 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:31.860 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:31.860 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery -ah 00:11:31.860 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.860 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 -ah 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # 
get_referral_ips rpc 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.861 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- 
# sort 00:11:32.122 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:32.122 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:32.122 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:32.122 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:32.122 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:32.122 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.123 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:32.384 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2 ]] 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.646 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.907 11:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:32.907 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.907 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:32.907 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.907 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.907 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.168 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 
-- # '[' tcp == tcp ']' 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.430 rmmod nvme_tcp 00:11:33.430 rmmod nvme_fabrics 00:11:33.430 rmmod nvme_keyring 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3314249 ']' 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3314249 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3314249 ']' 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3314249 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3314249 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3314249' 00:11:33.430 killing 
process with pid 3314249 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3314249 00:11:33.430 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3314249 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.692 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.704 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.704 00:11:35.704 real 0m13.796s 00:11:35.704 user 0m15.714s 00:11:35.704 sys 0m7.051s 00:11:35.704 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.704 11:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.704 ************************************ 00:11:35.704 END TEST nvmf_referrals 00:11:35.704 ************************************ 00:11:35.704 11:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.704 11:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.704 11:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.704 11:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.704 ************************************ 00:11:35.704 START TEST nvmf_connect_disconnect 00:11:35.704 ************************************ 00:11:35.704 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.967 * Looking for test storage... 
00:11:35.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.967 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.967 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.967 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.967 --rc genhtml_branch_coverage=1 00:11:35.967 --rc genhtml_function_coverage=1 00:11:35.967 --rc genhtml_legend=1 00:11:35.967 --rc geninfo_all_blocks=1 00:11:35.967 --rc geninfo_unexecuted_blocks=1 00:11:35.967 00:11:35.967 ' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.967 --rc genhtml_branch_coverage=1 00:11:35.967 --rc genhtml_function_coverage=1 00:11:35.967 --rc genhtml_legend=1 00:11:35.967 --rc geninfo_all_blocks=1 00:11:35.967 --rc geninfo_unexecuted_blocks=1 00:11:35.967 00:11:35.967 ' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.967 --rc genhtml_branch_coverage=1 00:11:35.967 --rc genhtml_function_coverage=1 00:11:35.967 --rc genhtml_legend=1 00:11:35.967 --rc geninfo_all_blocks=1 00:11:35.967 --rc geninfo_unexecuted_blocks=1 00:11:35.967 00:11:35.967 ' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.967 --rc genhtml_branch_coverage=1 00:11:35.967 --rc genhtml_function_coverage=1 00:11:35.967 --rc genhtml_legend=1 00:11:35.967 --rc geninfo_all_blocks=1 00:11:35.967 --rc geninfo_unexecuted_blocks=1 00:11:35.967 00:11:35.967 ' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.967 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.968 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.115 11:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.115 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.116 11:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:44.116 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:44.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.116 11:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:44.116 Found net devices under 0000:31:00.0: cvl_0_0 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.116 11:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:44.116 Found net devices under 0000:31:00.1: cvl_0_1 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.116 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.378 11:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:11:44.378 00:11:44.378 --- 10.0.0.2 ping statistics --- 00:11:44.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.378 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:11:44.378 00:11:44.378 --- 10.0.0.1 ping statistics --- 00:11:44.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.378 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.378 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3319777 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3319777 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3319777 ']' 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.670 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.670 [2024-12-06 11:10:50.614601] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:11:44.670 [2024-12-06 11:10:50.614653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.670 [2024-12-06 11:10:50.704270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.670 [2024-12-06 11:10:50.741812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:44.670 [2024-12-06 11:10:50.741849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.670 [2024-12-06 11:10:50.741857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.670 [2024-12-06 11:10:50.741869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.670 [2024-12-06 11:10:50.741875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.670 [2024-12-06 11:10:50.743464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.670 [2024-12-06 11:10:50.743580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.670 [2024-12-06 11:10:50.743736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.670 [2024-12-06 11:10:50.743736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:45.611 11:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.611 [2024-12-06 11:10:51.466656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.611 11:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:45.611 [2024-12-06 11:10:51.535347] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:45.611 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:49.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:03.932 11:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.932 rmmod nvme_tcp 00:12:03.932 rmmod nvme_fabrics 00:12:03.932 rmmod nvme_keyring 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3319777 ']' 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3319777 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3319777 ']' 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3319777 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3319777 
00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3319777' 00:12:03.932 killing process with pid 3319777 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3319777 00:12:03.932 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3319777 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.932 11:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.932 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.475 00:12:06.475 real 0m30.262s 00:12:06.475 user 1m19.201s 00:12:06.475 sys 0m7.878s 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:06.475 ************************************ 00:12:06.475 END TEST nvmf_connect_disconnect 00:12:06.475 ************************************ 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.475 ************************************ 00:12:06.475 START TEST nvmf_multitarget 00:12:06.475 ************************************ 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:06.475 * Looking for test storage... 
00:12:06.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.475 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.475 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.475 --rc genhtml_branch_coverage=1 00:12:06.475 --rc genhtml_function_coverage=1 00:12:06.475 --rc genhtml_legend=1 00:12:06.476 --rc geninfo_all_blocks=1 00:12:06.476 --rc geninfo_unexecuted_blocks=1 00:12:06.476 00:12:06.476 ' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.476 --rc genhtml_branch_coverage=1 00:12:06.476 --rc genhtml_function_coverage=1 00:12:06.476 --rc genhtml_legend=1 00:12:06.476 --rc geninfo_all_blocks=1 00:12:06.476 --rc geninfo_unexecuted_blocks=1 00:12:06.476 00:12:06.476 ' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.476 --rc genhtml_branch_coverage=1 00:12:06.476 --rc genhtml_function_coverage=1 00:12:06.476 --rc genhtml_legend=1 00:12:06.476 --rc geninfo_all_blocks=1 00:12:06.476 --rc geninfo_unexecuted_blocks=1 00:12:06.476 00:12:06.476 ' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.476 --rc genhtml_branch_coverage=1 00:12:06.476 --rc genhtml_function_coverage=1 00:12:06.476 --rc genhtml_legend=1 00:12:06.476 --rc geninfo_all_blocks=1 00:12:06.476 --rc geninfo_unexecuted_blocks=1 00:12:06.476 00:12:06.476 ' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.476 11:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.476 11:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:06.476 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:14.621 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:14.621 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:14.621 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:14.621 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.621 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.621 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:14.622 Found net devices under 0000:31:00.0: cvl_0_0 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.622 
11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:14.622 Found net devices under 0000:31:00.1: cvl_0_1 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.622 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.622 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.883 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.883 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.883 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:14.883 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.883 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.883 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.883 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:14.883 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:14.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:12:14.883 00:12:14.883 --- 10.0.0.2 ping statistics --- 00:12:14.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.883 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:12:14.883 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:12:14.883 00:12:14.883 --- 10.0.0.1 ping statistics --- 00:12:14.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.883 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:15.144 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.144 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:15.144 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.144 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.144 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.144 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3328493 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 3328493 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3328493 ']' 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.145 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 [2024-12-06 11:11:21.154813] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:12:15.145 [2024-12-06 11:11:21.154888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.145 [2024-12-06 11:11:21.247127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.145 [2024-12-06 11:11:21.289596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.145 [2024-12-06 11:11:21.289632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:15.145 [2024-12-06 11:11:21.289640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.145 [2024-12-06 11:11:21.289647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.145 [2024-12-06 11:11:21.289653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.145 [2024-12-06 11:11:21.291260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.145 [2024-12-06 11:11:21.291369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.145 [2024-12-06 11:11:21.291525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.145 [2024-12-06 11:11:21.291526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.088 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.089 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:16.089 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.089 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.089 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:16.089 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.089 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:16.089 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.089 11:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:16.089 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:16.089 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:16.089 "nvmf_tgt_1" 00:12:16.089 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:16.350 "nvmf_tgt_2" 00:12:16.350 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.350 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:16.350 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:16.350 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:16.612 true 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:16.612 true 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.612 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.612 rmmod nvme_tcp 00:12:16.612 rmmod nvme_fabrics 00:12:16.874 rmmod nvme_keyring 00:12:16.874 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.874 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:16.874 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:16.874 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3328493 ']' 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3328493 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3328493 ']' 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3328493 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3328493 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3328493' 00:12:16.875 killing process with pid 3328493 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3328493 00:12:16.875 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3328493 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.875 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.420 00:12:19.420 real 0m12.878s 00:12:19.420 user 0m10.069s 00:12:19.420 sys 0m7.033s 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 ************************************ 00:12:19.420 END TEST nvmf_multitarget 00:12:19.420 ************************************ 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 ************************************ 00:12:19.420 START TEST nvmf_rpc 00:12:19.420 ************************************ 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.420 * Looking for test storage... 
00:12:19.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.420 11:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.420 --rc genhtml_branch_coverage=1 00:12:19.420 --rc genhtml_function_coverage=1 00:12:19.420 --rc genhtml_legend=1 00:12:19.420 --rc geninfo_all_blocks=1 00:12:19.420 --rc geninfo_unexecuted_blocks=1 
00:12:19.420 00:12:19.420 ' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.420 --rc genhtml_branch_coverage=1 00:12:19.420 --rc genhtml_function_coverage=1 00:12:19.420 --rc genhtml_legend=1 00:12:19.420 --rc geninfo_all_blocks=1 00:12:19.420 --rc geninfo_unexecuted_blocks=1 00:12:19.420 00:12:19.420 ' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.420 --rc genhtml_branch_coverage=1 00:12:19.420 --rc genhtml_function_coverage=1 00:12:19.420 --rc genhtml_legend=1 00:12:19.420 --rc geninfo_all_blocks=1 00:12:19.420 --rc geninfo_unexecuted_blocks=1 00:12:19.420 00:12:19.420 ' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.420 --rc genhtml_branch_coverage=1 00:12:19.420 --rc genhtml_function_coverage=1 00:12:19.420 --rc genhtml_legend=1 00:12:19.420 --rc geninfo_all_blocks=1 00:12:19.420 --rc geninfo_unexecuted_blocks=1 00:12:19.420 00:12:19.420 ' 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.420 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.421 11:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.421 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.421 11:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.566 
11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:12:27.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:27.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:27.566 Found net devices under 0000:31:00.0: cvl_0_0 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:27.566 Found net devices under 0000:31:00.1: cvl_0_1 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.566 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.566 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.567 
11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:12:27.567 00:12:27.567 --- 10.0.0.2 ping statistics --- 00:12:27.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.567 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:12:27.567 00:12:27.567 --- 10.0.0.1 ping statistics --- 00:12:27.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.567 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.567 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3333561 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3333561 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3333561 ']' 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.830 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.830 [2024-12-06 11:11:33.810804] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:12:27.830 [2024-12-06 11:11:33.810896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.830 [2024-12-06 11:11:33.898651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.830 [2024-12-06 11:11:33.934464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.830 [2024-12-06 11:11:33.934495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:27.830 [2024-12-06 11:11:33.934503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.830 [2024-12-06 11:11:33.934510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.830 [2024-12-06 11:11:33.934515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.830 [2024-12-06 11:11:33.935983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.830 [2024-12-06 11:11:33.936090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.830 [2024-12-06 11:11:33.936246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.830 [2024-12-06 11:11:33.936246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.776 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.776 11:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:28.776 "tick_rate": 2400000000, 00:12:28.776 "poll_groups": [ 00:12:28.776 { 00:12:28.776 "name": "nvmf_tgt_poll_group_000", 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "transports": [] 00:12:28.776 }, 00:12:28.777 { 00:12:28.777 "name": "nvmf_tgt_poll_group_001", 00:12:28.777 "admin_qpairs": 0, 00:12:28.777 "io_qpairs": 0, 00:12:28.777 "current_admin_qpairs": 0, 00:12:28.777 "current_io_qpairs": 0, 00:12:28.777 "pending_bdev_io": 0, 00:12:28.777 "completed_nvme_io": 0, 00:12:28.777 "transports": [] 00:12:28.777 }, 00:12:28.777 { 00:12:28.777 "name": "nvmf_tgt_poll_group_002", 00:12:28.777 "admin_qpairs": 0, 00:12:28.777 "io_qpairs": 0, 00:12:28.777 "current_admin_qpairs": 0, 00:12:28.777 "current_io_qpairs": 0, 00:12:28.777 "pending_bdev_io": 0, 00:12:28.777 "completed_nvme_io": 0, 00:12:28.777 "transports": [] 00:12:28.777 }, 00:12:28.777 { 00:12:28.777 "name": "nvmf_tgt_poll_group_003", 00:12:28.777 "admin_qpairs": 0, 00:12:28.777 "io_qpairs": 0, 00:12:28.777 "current_admin_qpairs": 0, 00:12:28.777 "current_io_qpairs": 0, 00:12:28.777 "pending_bdev_io": 0, 00:12:28.777 "completed_nvme_io": 0, 00:12:28.777 "transports": [] 00:12:28.777 } 00:12:28.777 ] 00:12:28.777 }' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:28.777 11:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.777 [2024-12-06 11:11:34.747351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:28.777 "tick_rate": 2400000000, 00:12:28.777 "poll_groups": [ 00:12:28.777 { 00:12:28.777 "name": "nvmf_tgt_poll_group_000", 00:12:28.777 "admin_qpairs": 0, 00:12:28.777 "io_qpairs": 0, 00:12:28.777 "current_admin_qpairs": 0, 00:12:28.777 "current_io_qpairs": 0, 00:12:28.777 "pending_bdev_io": 0, 00:12:28.777 "completed_nvme_io": 0, 00:12:28.777 "transports": [ 00:12:28.777 { 00:12:28.777 "trtype": "TCP" 00:12:28.777 } 00:12:28.777 ] 00:12:28.777 }, 00:12:28.777 { 00:12:28.777 "name": "nvmf_tgt_poll_group_001", 00:12:28.777 "admin_qpairs": 0, 00:12:28.777 "io_qpairs": 0, 00:12:28.777 "current_admin_qpairs": 0, 00:12:28.777 "current_io_qpairs": 0, 00:12:28.777 "pending_bdev_io": 0, 00:12:28.777 
"completed_nvme_io": 0, 00:12:28.777 "transports": [ 00:12:28.777 { 00:12:28.777 "trtype": "TCP" 00:12:28.777 } 00:12:28.777 ] 00:12:28.777 }, 00:12:28.777 { 00:12:28.777 "name": "nvmf_tgt_poll_group_002", 00:12:28.777 "admin_qpairs": 0, 00:12:28.777 "io_qpairs": 0, 00:12:28.777 "current_admin_qpairs": 0, 00:12:28.777 "current_io_qpairs": 0, 00:12:28.777 "pending_bdev_io": 0, 00:12:28.777 "completed_nvme_io": 0, 00:12:28.777 "transports": [ 00:12:28.777 { 00:12:28.777 "trtype": "TCP" 00:12:28.777 } 00:12:28.777 ] 00:12:28.777 }, 00:12:28.777 { 00:12:28.777 "name": "nvmf_tgt_poll_group_003", 00:12:28.777 "admin_qpairs": 0, 00:12:28.777 "io_qpairs": 0, 00:12:28.777 "current_admin_qpairs": 0, 00:12:28.777 "current_io_qpairs": 0, 00:12:28.777 "pending_bdev_io": 0, 00:12:28.777 "completed_nvme_io": 0, 00:12:28.777 "transports": [ 00:12:28.777 { 00:12:28.777 "trtype": "TCP" 00:12:28.777 } 00:12:28.777 ] 00:12:28.777 } 00:12:28.777 ] 00:12:28.777 }' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:28.777 
11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.777 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.777 Malloc1 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:28.778 11:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.778 [2024-12-06 11:11:34.926369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:28.778 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:29.039 [2024-12-06 11:11:34.963435] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:29.039 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:29.039 could not add new controller: failed to write to nvme-fabrics device 00:12:29.039 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:29.039 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:29.039 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:29.039 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:29.039 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.039 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.039 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.039 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.039 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.424 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.424 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.424 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.424 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:30.424 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:32.467 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:32.467 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:32.467 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.467 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:32.467 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.467 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:12:32.467 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:32.753 11:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.753 [2024-12-06 11:11:38.689163] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:32.753 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.753 could not add new controller: failed to write to nvme-fabrics device 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:32.753 
11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.753 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.160 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.160 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.160 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.160 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.160 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.077 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.077 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.077 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:36.077 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.077 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.077 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:36.077 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:36.339 11:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 [2024-12-06 11:11:42.427390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.339 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.254 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.254 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:38.254 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.254 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:38.254 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.167 
11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 [2024-12-06 11:11:46.188179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.167 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.553 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.553 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:41.553 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.553 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:41.553 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.095 11:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.095 [2024-12-06 11:11:49.907635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.095 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.476 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.476 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:45.476 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.476 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:45.476 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
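The `waitforserial` / `waitforserial_disconnect` polling that the xtrace above keeps repeating (autotest_common.sh lines ~1202-1235) can be sketched as follows. This is a hedged re-creation from the trace, not the helper's actual source; variable names match the trace, but details such as the exact exit path may differ.

```shell
# Sketch of the waitforserial pattern seen in the trace: after `nvme connect`,
# sleep, then poll lsblk up to 16 times until a block device whose SERIAL
# column matches the expected SPDK serial appears.
waitforserial() {
  local serial=$1 i=0
  local nvme_device_counter=${2:-1} nvme_devices=0
  sleep 2                                   # trace shows an initial sleep 2
  while (( i++ <= 15 )); do
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
    sleep 2
  done
  return 1
}
# e.g. waitforserial SPDKISFASTANDAWESOME
```

The disconnect-side helper is the inverse: it polls `lsblk -l -o NAME,SERIAL | grep -q -w "$serial"` and returns once the device is gone.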
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.388 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 [2024-12-06 11:11:53.588533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.031 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.031 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:49.031 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:49.031 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:49.031 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 [2024-12-06 11:11:57.344769] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.603 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.989 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.989 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:52.989 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.989 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:52.989 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:54.902 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.902 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.902 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.902 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:54.902 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.902 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:54.902 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.903 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.903 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:54.903 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:54.903 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.903 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:54.903 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.163 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 [2024-12-06 11:12:01.115036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
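Each iteration logged above follows the same create → connect → verify → disconnect → delete cycle driven by target/rpc.sh (lines 81-94 in the trace). A hedged sketch of that loop body, reconstructed from the RPC calls in the trace (the `rpc_cmd` wrapper is assumed to forward to SPDK's `scripts/rpc.py`; `$loops` and the hostnqn come from the surrounding script):

```shell
# One iteration of the subsystem create/connect/teardown loop, as visible
# in the xtrace. Requires a running SPDK nvmf target listening via rpc.py.
NQN=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 "$loops"); do
  rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
    --hostnqn="$hostnqn" --hostid="$hostid"
  waitforserial SPDKISFASTANDAWESOME        # poll until the device appears
  nvme disconnect -n "$NQN"
  waitforserial_disconnect SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
  rpc_cmd nvmf_delete_subsystem "$NQN"
done
```

The later `seq 1 5` loop in the trace (target/rpc.sh lines 99-107) runs the same create/delete RPCs but skips the host-side `nvme connect`/`disconnect` steps entirely.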
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 [2024-12-06 11:12:01.179161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 
11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:55.164 [2024-12-06 11:12:01.251378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.164 [2024-12-06 11:12:01.319589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.164 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 [2024-12-06 11:12:01.387826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:55.426 "tick_rate": 2400000000, 00:12:55.426 "poll_groups": [ 00:12:55.426 { 00:12:55.426 "name": "nvmf_tgt_poll_group_000", 00:12:55.426 "admin_qpairs": 0, 00:12:55.426 "io_qpairs": 224, 00:12:55.426 "current_admin_qpairs": 0, 00:12:55.426 "current_io_qpairs": 0, 00:12:55.426 "pending_bdev_io": 0, 00:12:55.426 "completed_nvme_io": 421, 00:12:55.426 "transports": [ 00:12:55.426 { 00:12:55.426 "trtype": "TCP" 00:12:55.426 } 00:12:55.426 ] 00:12:55.426 }, 00:12:55.426 { 00:12:55.426 "name": "nvmf_tgt_poll_group_001", 00:12:55.426 "admin_qpairs": 1, 00:12:55.426 "io_qpairs": 223, 00:12:55.426 "current_admin_qpairs": 0, 00:12:55.426 "current_io_qpairs": 0, 00:12:55.426 "pending_bdev_io": 0, 00:12:55.426 "completed_nvme_io": 224, 00:12:55.426 "transports": [ 00:12:55.426 { 00:12:55.426 "trtype": "TCP" 00:12:55.426 } 00:12:55.426 ] 00:12:55.426 }, 00:12:55.426 { 00:12:55.426 "name": "nvmf_tgt_poll_group_002", 00:12:55.426 "admin_qpairs": 6, 00:12:55.426 "io_qpairs": 218, 00:12:55.426 "current_admin_qpairs": 0, 00:12:55.426 "current_io_qpairs": 0, 00:12:55.426 "pending_bdev_io": 0, 
00:12:55.426 "completed_nvme_io": 222, 00:12:55.426 "transports": [ 00:12:55.426 { 00:12:55.426 "trtype": "TCP" 00:12:55.426 } 00:12:55.426 ] 00:12:55.426 }, 00:12:55.426 { 00:12:55.426 "name": "nvmf_tgt_poll_group_003", 00:12:55.426 "admin_qpairs": 0, 00:12:55.426 "io_qpairs": 224, 00:12:55.426 "current_admin_qpairs": 0, 00:12:55.426 "current_io_qpairs": 0, 00:12:55.426 "pending_bdev_io": 0, 00:12:55.426 "completed_nvme_io": 372, 00:12:55.426 "transports": [ 00:12:55.426 { 00:12:55.426 "trtype": "TCP" 00:12:55.426 } 00:12:55.426 ] 00:12:55.426 } 00:12:55.426 ] 00:12:55.426 }' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.426 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.426 rmmod nvme_tcp 00:12:55.426 rmmod nvme_fabrics 00:12:55.426 rmmod nvme_keyring 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3333561 ']' 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3333561 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3333561 ']' 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3333561 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3333561 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3333561' 00:12:55.688 killing process with pid 3333561 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3333561 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3333561 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.688 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.236 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.236 00:12:58.236 real 0m38.735s 00:12:58.236 user 1m53.531s 00:12:58.236 sys 0m8.559s 00:12:58.236 11:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.236 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.236 ************************************ 00:12:58.236 END TEST nvmf_rpc 00:12:58.236 ************************************ 00:12:58.236 11:12:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:58.236 11:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.236 11:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.236 11:12:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.236 ************************************ 00:12:58.236 START TEST nvmf_invalid 00:12:58.236 ************************************ 00:12:58.236 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:58.236 * Looking for test storage... 
00:12:58.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.236 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:58.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.237 --rc genhtml_branch_coverage=1 00:12:58.237 --rc 
genhtml_function_coverage=1 00:12:58.237 --rc genhtml_legend=1 00:12:58.237 --rc geninfo_all_blocks=1 00:12:58.237 --rc geninfo_unexecuted_blocks=1 00:12:58.237 00:12:58.237 ' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:58.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.237 --rc genhtml_branch_coverage=1 00:12:58.237 --rc genhtml_function_coverage=1 00:12:58.237 --rc genhtml_legend=1 00:12:58.237 --rc geninfo_all_blocks=1 00:12:58.237 --rc geninfo_unexecuted_blocks=1 00:12:58.237 00:12:58.237 ' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:58.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.237 --rc genhtml_branch_coverage=1 00:12:58.237 --rc genhtml_function_coverage=1 00:12:58.237 --rc genhtml_legend=1 00:12:58.237 --rc geninfo_all_blocks=1 00:12:58.237 --rc geninfo_unexecuted_blocks=1 00:12:58.237 00:12:58.237 ' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:58.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.237 --rc genhtml_branch_coverage=1 00:12:58.237 --rc genhtml_function_coverage=1 00:12:58.237 --rc genhtml_legend=1 00:12:58.237 --rc geninfo_all_blocks=1 00:12:58.237 --rc geninfo_unexecuted_blocks=1 00:12:58.237 00:12:58.237 ' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.237 11:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.237 11:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:58.237 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.238 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.385 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.385 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:06.385 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:06.385 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:06.385 Found net devices under 0000:31:00.0: cvl_0_0 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:06.385 Found net devices under 0000:31:00.1: cvl_0_1 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.385 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.385 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.646 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:06.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:13:06.646 00:13:06.646 --- 10.0.0.2 ping statistics --- 00:13:06.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.646 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:13:06.646 00:13:06.646 --- 10.0.0.1 ping statistics --- 00:13:06.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.646 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.646 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3344367 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3344367 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3344367 ']' 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
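The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) moves the target-side E810 port into a private network namespace and leaves the initiator-side port in the root namespace, giving the test a real NVMe/TCP path between 10.0.0.1 and 10.0.0.2, with an iptables rule opening the NVMe/TCP port. A hedged dry-run sketch of that sequence — the commands are printed rather than executed, since the real ones need root plus the cvl_0_0/cvl_0_1 netdevs, and the exact ordering here is reconstructed from the trace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace above.
# Nothing is executed; the command list is only printed.
NS=cvl_0_0_ns_spdk
cmds=(
  "ip netns add $NS"
  "ip link set cvl_0_0 netns $NS"                            # target NIC into the namespace
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                      # initiator IP, root namespace
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"    # target IP, inside the namespace
  "ip link set cvl_0_1 up"
  "ip netns exec $NS ip link set cvl_0_0 up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"  # allow the NVMe/TCP port
)
printf '%s\n' "${cmds[@]}"
```

After this setup, the harness verifies connectivity in both directions with `ping -c 1`, exactly as the two ping blocks in the trace show.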
00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.646 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.907 [2024-12-06 11:12:12.818108] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:13:06.907 [2024-12-06 11:12:12.818174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.907 [2024-12-06 11:12:12.913312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.907 [2024-12-06 11:12:12.955030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.907 [2024-12-06 11:12:12.955086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.907 [2024-12-06 11:12:12.955095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.907 [2024-12-06 11:12:12.955102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.907 [2024-12-06 11:12:12.955107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
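Each negative test in target/invalid.sh that follows captures the JSON-RPC error text from rpc.py into a variable and validates it with a bash glob match — that is what the escaped patterns like `*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t*` in the trace are. A minimal standalone sketch of that check, using a hard-coded copy of the error response in place of the live `rpc.py nvmf_create_subsystem` call:

```shell
#!/usr/bin/env bash
# Hedged sketch: the real script fills $out from
# "rpc.py nvmf_create_subsystem -t foobar nqn...."; the JSON-RPC error
# below is a hard-coded sample matching the one in the log.
out='request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode28362",
  "tgt_name": "foobar",
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Unable to find target foobar"
}'

# The check passes when the expected error phrase appears anywhere in $out;
# [[ ... == *pattern* ]] does glob matching, not regex.
if [[ $out == *"Unable to find target"* ]]; then
  result=pass
else
  result=fail
fi
echo "$result"
```

The same pattern repeats for the invalid serial number (`*Invalid SN*`) and invalid model number (`*Invalid MN*`) cases below.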
00:13:06.907 [2024-12-06 11:12:12.956741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.907 [2024-12-06 11:12:12.956859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.907 [2024-12-06 11:12:12.957016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.907 [2024-12-06 11:12:12.957017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.478 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.478 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:07.478 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.478 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.478 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:07.739 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.739 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:07.739 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28362 00:13:07.739 [2024-12-06 11:12:13.825284] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:07.739 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:07.739 { 00:13:07.739 "nqn": "nqn.2016-06.io.spdk:cnode28362", 00:13:07.739 "tgt_name": "foobar", 00:13:07.739 "method": "nvmf_create_subsystem", 00:13:07.739 "req_id": 1 00:13:07.739 } 00:13:07.739 Got JSON-RPC error 
response 00:13:07.739 response: 00:13:07.739 { 00:13:07.739 "code": -32603, 00:13:07.739 "message": "Unable to find target foobar" 00:13:07.739 }' 00:13:07.739 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:07.739 { 00:13:07.739 "nqn": "nqn.2016-06.io.spdk:cnode28362", 00:13:07.739 "tgt_name": "foobar", 00:13:07.739 "method": "nvmf_create_subsystem", 00:13:07.739 "req_id": 1 00:13:07.739 } 00:13:07.739 Got JSON-RPC error response 00:13:07.739 response: 00:13:07.739 { 00:13:07.739 "code": -32603, 00:13:07.739 "message": "Unable to find target foobar" 00:13:07.739 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:07.739 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:07.739 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29434 00:13:07.999 [2024-12-06 11:12:14.017952] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29434: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:07.999 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:07.999 { 00:13:07.999 "nqn": "nqn.2016-06.io.spdk:cnode29434", 00:13:07.999 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.999 "method": "nvmf_create_subsystem", 00:13:07.999 "req_id": 1 00:13:07.999 } 00:13:07.999 Got JSON-RPC error response 00:13:07.999 response: 00:13:07.999 { 00:13:07.999 "code": -32602, 00:13:07.999 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.999 }' 00:13:07.999 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:07.999 { 00:13:07.999 "nqn": "nqn.2016-06.io.spdk:cnode29434", 00:13:07.999 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.999 "method": "nvmf_create_subsystem", 
00:13:07.999 "req_id": 1 00:13:07.999 } 00:13:07.999 Got JSON-RPC error response 00:13:07.999 response: 00:13:07.999 { 00:13:07.999 "code": -32602, 00:13:07.999 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.999 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.999 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:07.999 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25751 00:13:08.260 [2024-12-06 11:12:14.210538] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25751: invalid model number 'SPDK_Controller' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:08.260 { 00:13:08.260 "nqn": "nqn.2016-06.io.spdk:cnode25751", 00:13:08.260 "model_number": "SPDK_Controller\u001f", 00:13:08.260 "method": "nvmf_create_subsystem", 00:13:08.260 "req_id": 1 00:13:08.260 } 00:13:08.260 Got JSON-RPC error response 00:13:08.260 response: 00:13:08.260 { 00:13:08.260 "code": -32602, 00:13:08.260 "message": "Invalid MN SPDK_Controller\u001f" 00:13:08.260 }' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:08.260 { 00:13:08.260 "nqn": "nqn.2016-06.io.spdk:cnode25751", 00:13:08.260 "model_number": "SPDK_Controller\u001f", 00:13:08.260 "method": "nvmf_create_subsystem", 00:13:08.260 "req_id": 1 00:13:08.260 } 00:13:08.260 Got JSON-RPC error response 00:13:08.260 response: 00:13:08.260 { 00:13:08.260 "code": -32602, 00:13:08.260 "message": "Invalid MN SPDK_Controller\u001f" 00:13:08.260 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:08.260 11:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:08.260 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:08.260 11:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 
00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:08.261 
11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:13:08.261 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8verb75,v2x!,{0wcV /dev/null' 00:13:10.868 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.462 00:13:13.462 real 0m15.096s 00:13:13.462 user 0m21.011s 00:13:13.462 sys 0m7.385s 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:13.462 ************************************ 00:13:13.462 END TEST nvmf_invalid 00:13:13.462 ************************************ 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.462 ************************************ 00:13:13.462 START TEST nvmf_connect_stress 00:13:13.462 ************************************ 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:13.462 * Looking for test storage... 00:13:13.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.462 11:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:13.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.462 --rc genhtml_branch_coverage=1 00:13:13.462 --rc genhtml_function_coverage=1 00:13:13.462 --rc genhtml_legend=1 00:13:13.462 --rc 
geninfo_all_blocks=1 00:13:13.462 --rc geninfo_unexecuted_blocks=1 00:13:13.462 00:13:13.462 ' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:13.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.462 --rc genhtml_branch_coverage=1 00:13:13.462 --rc genhtml_function_coverage=1 00:13:13.462 --rc genhtml_legend=1 00:13:13.462 --rc geninfo_all_blocks=1 00:13:13.462 --rc geninfo_unexecuted_blocks=1 00:13:13.462 00:13:13.462 ' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:13.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.462 --rc genhtml_branch_coverage=1 00:13:13.462 --rc genhtml_function_coverage=1 00:13:13.462 --rc genhtml_legend=1 00:13:13.462 --rc geninfo_all_blocks=1 00:13:13.462 --rc geninfo_unexecuted_blocks=1 00:13:13.462 00:13:13.462 ' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:13.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.462 --rc genhtml_branch_coverage=1 00:13:13.462 --rc genhtml_function_coverage=1 00:13:13.462 --rc genhtml_legend=1 00:13:13.462 --rc geninfo_all_blocks=1 00:13:13.462 --rc geninfo_unexecuted_blocks=1 00:13:13.462 00:13:13.462 ' 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.462 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.463 
11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.463 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.693 11:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:21.693 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:21.693 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.693 11:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:21.693 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:21.694 Found net devices under 0000:31:00.0: cvl_0_0 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:21.694 Found net devices under 0000:31:00.1: cvl_0_1 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:21.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:13:21.694 00:13:21.694 --- 10.0.0.2 ping statistics --- 00:13:21.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.694 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:13:21.694 00:13:21.694 --- 10.0.0.1 ping statistics --- 00:13:21.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.694 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:21.694 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3350214 00:13:21.956 11:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3350214 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3350214 ']' 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.956 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.956 [2024-12-06 11:12:27.946755] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:13:21.956 [2024-12-06 11:12:27.946825] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.956 [2024-12-06 11:12:28.055045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.956 [2024-12-06 11:12:28.106028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:21.956 [2024-12-06 11:12:28.106080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.956 [2024-12-06 11:12:28.106089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.956 [2024-12-06 11:12:28.106096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.956 [2024-12-06 11:12:28.106102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.956 [2024-12-06 11:12:28.107955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.956 [2024-12-06 11:12:28.108251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.956 [2024-12-06 11:12:28.108251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.899 [2024-12-06 11:12:28.792312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.899 [2024-12-06 11:12:28.816650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:22.899 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.900 NULL1 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3350436 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.900 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.162 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.162 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:23.162 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.162 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.162 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.733 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.733 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:23.733 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.733 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.733 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.994 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.994 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:23.994 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.994 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.994 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.255 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.255 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:24.255 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.255 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.255 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.515 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.515 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:24.515 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.515 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.515 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.774 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.774 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:24.774 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.774 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.774 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.343 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.343 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:25.343 11:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.343 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.343 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.603 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.603 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:25.603 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.603 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.603 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.863 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.863 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:25.863 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.863 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.863 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.124 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.124 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:26.124 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.124 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.124 
11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:26.386 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.386 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.960 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.960 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:26.960 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.960 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.960 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.221 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.221 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:27.221 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.221 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.221 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.483 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.483 
11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:27.483 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.483 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.483 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.744 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.744 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:27.744 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.744 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.744 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.005 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.005 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:28.005 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.005 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.005 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.577 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.577 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:28.577 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:28.577 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.577 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.837 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.837 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:28.837 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.837 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.837 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.098 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.098 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:29.098 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.098 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.098 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.361 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:29.361 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.361 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.361 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:29.621 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.621 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:29.621 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.621 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.621 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.193 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.193 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:30.193 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.193 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.193 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.453 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.453 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:30.453 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.453 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.453 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.713 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.713 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3350436 00:13:30.713 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.713 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.713 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.973 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.973 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:30.973 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.973 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.973 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.545 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.545 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:31.545 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.545 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.545 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.805 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.805 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:31.805 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.805 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:31.805 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.065 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.065 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:32.065 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.065 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.065 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.325 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.325 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:32.325 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.325 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.325 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.585 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.585 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:32.585 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.585 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.585 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.846 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3350436 00:13:33.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3350436) - No such process 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3350436 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.107 rmmod nvme_tcp 00:13:33.107 rmmod nvme_fabrics 00:13:33.107 rmmod nvme_keyring 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3350214 ']' 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3350214 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3350214 ']' 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3350214 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3350214 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3350214' 00:13:33.107 killing process with pid 3350214 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3350214 00:13:33.107 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3350214 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.368 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.283 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.283 00:13:35.283 real 0m22.232s 00:13:35.283 user 0m42.601s 00:13:35.283 sys 0m9.747s 00:13:35.283 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.283 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.283 ************************************ 00:13:35.283 END TEST nvmf_connect_stress 00:13:35.283 ************************************ 00:13:35.283 11:12:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:35.283 11:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.283 11:12:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.283 11:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.544 ************************************ 00:13:35.544 START TEST nvmf_fused_ordering 00:13:35.544 ************************************ 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:35.544 * Looking for test storage... 00:13:35.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.544 11:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.544 --rc genhtml_branch_coverage=1 00:13:35.544 --rc genhtml_function_coverage=1 00:13:35.544 --rc genhtml_legend=1 00:13:35.544 --rc geninfo_all_blocks=1 00:13:35.544 --rc geninfo_unexecuted_blocks=1 00:13:35.544 00:13:35.544 ' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.544 --rc genhtml_branch_coverage=1 00:13:35.544 --rc genhtml_function_coverage=1 00:13:35.544 --rc genhtml_legend=1 00:13:35.544 --rc geninfo_all_blocks=1 00:13:35.544 --rc geninfo_unexecuted_blocks=1 00:13:35.544 00:13:35.544 ' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.544 --rc genhtml_branch_coverage=1 00:13:35.544 --rc genhtml_function_coverage=1 00:13:35.544 --rc genhtml_legend=1 00:13:35.544 --rc geninfo_all_blocks=1 00:13:35.544 --rc geninfo_unexecuted_blocks=1 00:13:35.544 00:13:35.544 ' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.544 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:35.544 --rc genhtml_branch_coverage=1 00:13:35.544 --rc genhtml_function_coverage=1 00:13:35.544 --rc genhtml_legend=1 00:13:35.544 --rc geninfo_all_blocks=1 00:13:35.544 --rc geninfo_unexecuted_blocks=1 00:13:35.544 00:13:35.544 ' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.544 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.544 11:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.545 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.806 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.806 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.806 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.806 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.942 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:43.942 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.942 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:43.942 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:43.942 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.943 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:43.943 Found net devices under 0000:31:00.0: cvl_0_0 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:43.943 Found net devices under 0000:31:00.1: cvl_0_1 
00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.943 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.943 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.943 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.943 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.943 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:44.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:44.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:13:44.202 00:13:44.202 --- 10.0.0.2 ping statistics --- 00:13:44.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.202 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:13:44.202 00:13:44.202 --- 10.0.0.1 ping statistics --- 00:13:44.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.202 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:44.202 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3357277 00:13:44.202 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3357277 00:13:44.203 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:44.203 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3357277 ']' 00:13:44.203 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.203 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.203 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.203 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.203 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.203 [2024-12-06 11:12:50.259592] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:13:44.203 [2024-12-06 11:12:50.259646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.203 [2024-12-06 11:12:50.361549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.462 [2024-12-06 11:12:50.399944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.462 [2024-12-06 11:12:50.399978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.462 [2024-12-06 11:12:50.399986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.462 [2024-12-06 11:12:50.399997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.462 [2024-12-06 11:12:50.400003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:44.462 [2024-12-06 11:12:50.400664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.034 [2024-12-06 11:12:51.121974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.034 [2024-12-06 11:12:51.146298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.034 NULL1 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.034 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:45.035 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.035 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.035 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.035 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:45.295 [2024-12-06 11:12:51.218212] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:13:45.295 [2024-12-06 11:12:51.218292] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357323 ] 00:13:45.866 Attached to nqn.2016-06.io.spdk:cnode1 00:13:45.866 Namespace ID: 1 size: 1GB 00:13:45.866 fused_ordering(0) 00:13:45.866 fused_ordering(1) 00:13:45.866 fused_ordering(2) 00:13:45.866 fused_ordering(3) 00:13:45.866 fused_ordering(4) 00:13:45.866 fused_ordering(5) 00:13:45.866 fused_ordering(6) 00:13:45.866 fused_ordering(7) 00:13:45.866 fused_ordering(8) 00:13:45.866 fused_ordering(9) 00:13:45.866 fused_ordering(10) 00:13:45.866 fused_ordering(11) 00:13:45.866 fused_ordering(12) 00:13:45.866 fused_ordering(13) 00:13:45.866 fused_ordering(14) 00:13:45.866 fused_ordering(15) 00:13:45.866 fused_ordering(16) 00:13:45.866 fused_ordering(17) 00:13:45.866 fused_ordering(18) 00:13:45.866 fused_ordering(19) 00:13:45.866 fused_ordering(20) 00:13:45.866 fused_ordering(21) 00:13:45.866 fused_ordering(22) 00:13:45.866 fused_ordering(23) 00:13:45.866 fused_ordering(24) 00:13:45.866 fused_ordering(25) 00:13:45.866 fused_ordering(26) 00:13:45.866 fused_ordering(27) 00:13:45.866 
[... fused_ordering(28) through fused_ordering(997) elided: repetitive per-iteration log lines emitted between 00:13:45.866 and 00:13:47.536 ...]
00:13:47.536 fused_ordering(998) 00:13:47.536 fused_ordering(999) 00:13:47.536 fused_ordering(1000) 00:13:47.536 fused_ordering(1001) 00:13:47.536 fused_ordering(1002) 00:13:47.536 fused_ordering(1003) 00:13:47.536 fused_ordering(1004) 00:13:47.536 fused_ordering(1005) 00:13:47.536 fused_ordering(1006) 00:13:47.536 fused_ordering(1007) 00:13:47.536 fused_ordering(1008) 00:13:47.536 fused_ordering(1009) 00:13:47.536 fused_ordering(1010) 00:13:47.536 fused_ordering(1011) 00:13:47.536 fused_ordering(1012) 00:13:47.536 fused_ordering(1013) 00:13:47.536 fused_ordering(1014) 00:13:47.536 fused_ordering(1015) 00:13:47.536 fused_ordering(1016) 00:13:47.536 fused_ordering(1017) 00:13:47.536 fused_ordering(1018) 00:13:47.536 fused_ordering(1019) 00:13:47.536 fused_ordering(1020) 00:13:47.536 fused_ordering(1021) 00:13:47.536 fused_ordering(1022) 00:13:47.536 fused_ordering(1023) 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:47.536 rmmod nvme_tcp 00:13:47.536 rmmod nvme_fabrics 00:13:47.536 rmmod nvme_keyring 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3357277 ']' 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3357277 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3357277 ']' 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3357277 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3357277 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3357277' 00:13:47.536 killing process with pid 3357277 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3357277 00:13:47.536 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3357277 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.797 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.342 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:50.342 00:13:50.342 real 0m14.453s 00:13:50.342 user 0m7.470s 00:13:50.342 sys 0m7.782s 00:13:50.342 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.342 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.342 ************************************ 00:13:50.342 END TEST nvmf_fused_ordering 00:13:50.342 ************************************ 00:13:50.342 11:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:50.342 11:12:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:50.342 11:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.342 11:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.342 ************************************ 00:13:50.342 START TEST nvmf_ns_masking 00:13:50.342 ************************************ 00:13:50.342 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:50.342 * Looking for test storage... 00:13:50.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.342 11:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.342 --rc genhtml_branch_coverage=1 00:13:50.342 --rc genhtml_function_coverage=1 00:13:50.342 --rc genhtml_legend=1 00:13:50.342 --rc geninfo_all_blocks=1 00:13:50.342 --rc geninfo_unexecuted_blocks=1 00:13:50.342 00:13:50.342 ' 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.342 --rc genhtml_branch_coverage=1 00:13:50.342 --rc genhtml_function_coverage=1 00:13:50.342 --rc genhtml_legend=1 00:13:50.342 --rc geninfo_all_blocks=1 00:13:50.342 --rc geninfo_unexecuted_blocks=1 00:13:50.342 00:13:50.342 ' 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.342 --rc genhtml_branch_coverage=1 00:13:50.342 --rc genhtml_function_coverage=1 00:13:50.342 --rc genhtml_legend=1 00:13:50.342 --rc geninfo_all_blocks=1 00:13:50.342 --rc geninfo_unexecuted_blocks=1 00:13:50.342 00:13:50.342 ' 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.342 --rc genhtml_branch_coverage=1 00:13:50.342 --rc 
genhtml_function_coverage=1 00:13:50.342 --rc genhtml_legend=1 00:13:50.342 --rc geninfo_all_blocks=1 00:13:50.342 --rc geninfo_unexecuted_blocks=1 00:13:50.342 00:13:50.342 ' 00:13:50.342 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e4efd6a8-094c-4157-b3fa-e744ce5561ea 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6c09c0a9-d501-4af4-92ab-2c9837b2e3d2 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3f97e2f6-4f8e-4694-addb-1ed77f6a0fd7 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:50.343 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.490 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.491 11:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.491 11:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:58.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:58.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:13:58.491 Found net devices under 0000:31:00.0: cvl_0_0 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:58.491 Found net devices under 0000:31:00.1: cvl_0_1 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.491 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:58.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:13:58.492 00:13:58.492 --- 10.0.0.2 ping statistics --- 00:13:58.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.492 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:58.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:13:58.492 00:13:58.492 --- 10.0.0.1 ping statistics --- 00:13:58.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.492 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.492 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3362668 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3362668 
00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3362668 ']' 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.752 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.752 [2024-12-06 11:13:04.732208] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:13:58.752 [2024-12-06 11:13:04.732259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.752 [2024-12-06 11:13:04.820087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.752 [2024-12-06 11:13:04.856051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.752 [2024-12-06 11:13:04.856086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:58.752 [2024-12-06 11:13:04.856095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.752 [2024-12-06 11:13:04.856101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.752 [2024-12-06 11:13:04.856107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.752 [2024-12-06 11:13:04.856723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.492 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.492 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:59.492 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.492 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.492 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:59.492 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.492 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:59.752 [2024-12-06 11:13:05.722808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.752 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:59.752 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:59.752 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:59.752 Malloc1 00:13:59.752 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:00.011 Malloc2 00:14:00.011 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:00.271 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:00.271 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.530 [2024-12-06 11:13:06.565672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.530 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:00.530 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3f97e2f6-4f8e-4694-addb-1ed77f6a0fd7 -a 10.0.0.2 -s 4420 -i 4 00:14:00.790 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.790 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:00.790 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.790 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:00.790 11:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:02.699 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.959 [ 0]:0x1 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.959 
11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9759a873075444578ca9a1892656bbf4 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9759a873075444578ca9a1892656bbf4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.959 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:02.959 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:02.959 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.959 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.959 [ 0]:0x1 00:14:02.959 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.959 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9759a873075444578ca9a1892656bbf4 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9759a873075444578ca9a1892656bbf4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.221 [ 1]:0x2 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7dee592b1ef4499581a699314c8f6c3b 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7dee592b1ef4499581a699314c8f6c3b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:03.221 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.483 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.745 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:03.745 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:03.745 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3f97e2f6-4f8e-4694-addb-1ed77f6a0fd7 -a 10.0.0.2 -s 4420 -i 4 00:14:04.006 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:04.006 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:04.006 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.006 11:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:04.006 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:04.006 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.048 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.309 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:06.309 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.309 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:06.309 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.310 [ 0]:0x2 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7dee592b1ef4499581a699314c8f6c3b 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7dee592b1ef4499581a699314c8f6c3b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.310 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.571 [ 0]:0x1 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9759a873075444578ca9a1892656bbf4 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9759a873075444578ca9a1892656bbf4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.571 [ 1]:0x2 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7dee592b1ef4499581a699314c8f6c3b 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7dee592b1ef4499581a699314c8f6c3b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.571 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.833 [ 0]:0x2 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7dee592b1ef4499581a699314c8f6c3b 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7dee592b1ef4499581a699314c8f6c3b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.833 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.094 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:07.094 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3f97e2f6-4f8e-4694-addb-1ed77f6a0fd7 -a 10.0.0.2 -s 4420 -i 4 00:14:07.355 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:07.355 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:07.355 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.355 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:07.355 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:07.355 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.270 [ 0]:0x1 00:14:09.270 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.270 11:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9759a873075444578ca9a1892656bbf4 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9759a873075444578ca9a1892656bbf4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.531 [ 1]:0x2 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7dee592b1ef4499581a699314c8f6c3b 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7dee592b1ef4499581a699314c8f6c3b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.531 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:09.791 
11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.791 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.792 [ 0]:0x2 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7dee592b1ef4499581a699314c8f6c3b 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7dee592b1ef4499581a699314c8f6c3b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.792 11:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:09.792 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:10.053 [2024-12-06 11:13:15.996913] nvmf_rpc.c:1895:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:10.053 request: 00:14:10.053 { 00:14:10.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.053 "nsid": 2, 00:14:10.053 "host": "nqn.2016-06.io.spdk:host1", 00:14:10.053 "method": "nvmf_ns_remove_host", 00:14:10.053 "req_id": 1 00:14:10.053 } 00:14:10.053 Got JSON-RPC error response 00:14:10.053 response: 00:14:10.053 { 00:14:10.053 "code": -32602, 00:14:10.053 "message": "Invalid parameters" 00:14:10.053 } 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:10.053 11:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.053 [ 0]:0x2 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7dee592b1ef4499581a699314c8f6c3b 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7dee592b1ef4499581a699314c8f6c3b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3365171 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3365171 /var/tmp/host.sock 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3365171 ']' 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:10.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.053 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.313 [2024-12-06 11:13:16.276418] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:14:10.313 [2024-12-06 11:13:16.276473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3365171 ] 00:14:10.313 [2024-12-06 11:13:16.372609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.313 [2024-12-06 11:13:16.408519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.884 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.884 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:10.884 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.144 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.423 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e4efd6a8-094c-4157-b3fa-e744ce5561ea 00:14:11.423 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:11.423 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E4EFD6A8094C4157B3FAE744CE5561EA -i 00:14:11.423 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6c09c0a9-d501-4af4-92ab-2c9837b2e3d2 00:14:11.423 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:11.423 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6C09C0A9D5014AF492AB2C9837B2E3D2 -i 00:14:11.684 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:11.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:11.945 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:12.205 nvme0n1 00:14:12.205 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:12.205 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:12.467 nvme1n2 00:14:12.467 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:12.467 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:12.467 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:12.467 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:12.467 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:12.727 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:12.727 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:12.727 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:12.727 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:12.987 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e4efd6a8-094c-4157-b3fa-e744ce5561ea == \e\4\e\f\d\6\a\8\-\0\9\4\c\-\4\1\5\7\-\b\3\f\a\-\e\7\4\4\c\e\5\5\6\1\e\a ]] 00:14:12.987 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:12.987 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:12.987 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:12.988 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6c09c0a9-d501-4af4-92ab-2c9837b2e3d2 == \6\c\0\9\c\0\a\9\-\d\5\0\1\-\4\a\f\4\-\9\2\a\b\-\2\c\9\8\3\7\b\2\e\3\d\2 ]] 00:14:12.988 11:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.248 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.248 [2024-12-06 11:13:19.402906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:2 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:13.248 [2024-12-06 11:13:19.402944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:14:13.248 [2024-12-06 11:13:19.402959] nvme_ns.c: 287:nvme_ctrlr_identify_id_desc: *WARNING*: Failed to retrieve NS ID Descriptor List 00:14:13.509 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e4efd6a8-094c-4157-b3fa-e744ce5561ea 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E4EFD6A8094C4157B3FAE744CE5561EA 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E4EFD6A8094C4157B3FAE744CE5561EA 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E4EFD6A8094C4157B3FAE744CE5561EA 00:14:13.510 [2024-12-06 11:13:19.570807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:13.510 [2024-12-06 11:13:19.570838] subsystem.c:2310:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:13.510 [2024-12-06 11:13:19.570847] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.510 request: 00:14:13.510 { 00:14:13.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.510 "namespace": { 00:14:13.510 "bdev_name": "invalid", 
00:14:13.510 "nsid": 1, 00:14:13.510 "nguid": "E4EFD6A8094C4157B3FAE744CE5561EA", 00:14:13.510 "no_auto_visible": false, 00:14:13.510 "hide_metadata": false 00:14:13.510 }, 00:14:13.510 "method": "nvmf_subsystem_add_ns", 00:14:13.510 "req_id": 1 00:14:13.510 } 00:14:13.510 Got JSON-RPC error response 00:14:13.510 response: 00:14:13.510 { 00:14:13.510 "code": -32602, 00:14:13.510 "message": "Invalid parameters" 00:14:13.510 } 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e4efd6a8-094c-4157-b3fa-e744ce5561ea 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:13.510 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E4EFD6A8094C4157B3FAE744CE5561EA -i 00:14:13.770 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:15.684 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:15.684 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:15.684 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:15.945 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:15.945 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3365171 00:14:15.945 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3365171 ']' 00:14:15.945 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3365171 00:14:15.945 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:15.945 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.945 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3365171 00:14:15.945 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:15.945 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:15.945 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3365171' 00:14:15.945 killing process with pid 3365171 00:14:15.945 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3365171 00:14:15.945 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3365171 00:14:16.207 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.207 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.468 rmmod nvme_tcp 00:14:16.468 rmmod nvme_fabrics 00:14:16.468 rmmod nvme_keyring 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3362668 ']' 00:14:16.468 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3362668 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3362668 ']' 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3362668 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3362668 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3362668' 00:14:16.469 killing process with pid 3362668 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3362668 00:14:16.469 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3362668 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.729 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
00:14:18.643 00:14:18.643 real 0m28.763s 00:14:18.643 user 0m31.416s 00:14:18.643 sys 0m8.805s 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.643 ************************************ 00:14:18.643 END TEST nvmf_ns_masking 00:14:18.643 ************************************ 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.643 11:13:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.905 ************************************ 00:14:18.905 START TEST nvmf_nvme_cli 00:14:18.905 ************************************ 00:14:18.905 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:18.905 * Looking for test storage... 
00:14:18.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.905 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:18.905 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:18.905 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.905 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:18.906 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.906 --rc 
genhtml_branch_coverage=1 00:14:18.906 --rc genhtml_function_coverage=1 00:14:18.906 --rc genhtml_legend=1 00:14:18.906 --rc geninfo_all_blocks=1 00:14:18.906 --rc geninfo_unexecuted_blocks=1 00:14:18.906 00:14:18.906 ' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.906 --rc genhtml_branch_coverage=1 00:14:18.906 --rc genhtml_function_coverage=1 00:14:18.906 --rc genhtml_legend=1 00:14:18.906 --rc geninfo_all_blocks=1 00:14:18.906 --rc geninfo_unexecuted_blocks=1 00:14:18.906 00:14:18.906 ' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.906 --rc genhtml_branch_coverage=1 00:14:18.906 --rc genhtml_function_coverage=1 00:14:18.906 --rc genhtml_legend=1 00:14:18.906 --rc geninfo_all_blocks=1 00:14:18.906 --rc geninfo_unexecuted_blocks=1 00:14:18.906 00:14:18.906 ' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.906 --rc genhtml_branch_coverage=1 00:14:18.906 --rc genhtml_function_coverage=1 00:14:18.906 --rc genhtml_legend=1 00:14:18.906 --rc geninfo_all_blocks=1 00:14:18.906 --rc geninfo_unexecuted_blocks=1 00:14:18.906 00:14:18.906 ' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.906 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.906 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.906 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:18.906 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:19.168 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:27.304 11:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.304 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:27.305 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:27.305 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.305 11:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:27.305 Found net devices under 0000:31:00.0: cvl_0_0 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:27.305 Found net devices under 0000:31:00.1: cvl_0_1 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.305 11:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:27.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:14:27.305 00:14:27.305 --- 10.0.0.2 ping statistics --- 00:14:27.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.305 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:14:27.305 00:14:27.305 --- 10.0.0.1 ping statistics --- 00:14:27.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.305 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.305 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:27.305 11:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3371107 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3371107 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3371107 ']' 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.567 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.567 [2024-12-06 11:13:33.565835] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:14:27.567 [2024-12-06 11:13:33.565908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.567 [2024-12-06 11:13:33.657671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.567 [2024-12-06 11:13:33.701202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.567 [2024-12-06 11:13:33.701241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.567 [2024-12-06 11:13:33.701249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.567 [2024-12-06 11:13:33.701256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.567 [2024-12-06 11:13:33.701261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:27.567 [2024-12-06 11:13:33.703167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.567 [2024-12-06 11:13:33.703284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.567 [2024-12-06 11:13:33.703444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.567 [2024-12-06 11:13:33.703444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 [2024-12-06 11:13:34.431575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 Malloc0 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 Malloc1 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 [2024-12-06 11:13:34.535883] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.512 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:14:28.773 00:14:28.773 Discovery Log Number of Records 2, Generation counter 2 00:14:28.773 =====Discovery Log Entry 0====== 00:14:28.773 trtype: tcp 00:14:28.773 adrfam: ipv4 00:14:28.773 subtype: current discovery subsystem 00:14:28.773 treq: not required 00:14:28.773 portid: 0 00:14:28.773 trsvcid: 4420 
00:14:28.773 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:28.773 traddr: 10.0.0.2 00:14:28.773 eflags: explicit discovery connections, duplicate discovery information 00:14:28.773 sectype: none 00:14:28.773 =====Discovery Log Entry 1====== 00:14:28.773 trtype: tcp 00:14:28.773 adrfam: ipv4 00:14:28.773 subtype: nvme subsystem 00:14:28.773 treq: not required 00:14:28.773 portid: 0 00:14:28.773 trsvcid: 4420 00:14:28.773 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:28.773 traddr: 10.0.0.2 00:14:28.773 eflags: none 00:14:28.773 sectype: none 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:28.773 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.157 11:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:30.157 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:30.157 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.157 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:30.157 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:30.157 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:32.699 
11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:32.699 /dev/nvme0n2 ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:32.699 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:32.959 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:32.960 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:32.960 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:32.960 rmmod nvme_tcp 00:14:32.960 rmmod nvme_fabrics 00:14:32.960 rmmod nvme_keyring 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3371107 ']' 
00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3371107 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3371107 ']' 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3371107 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.960 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3371107 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3371107' 00:14:33.221 killing process with pid 3371107 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3371107 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3371107 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.221 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.768 00:14:35.768 real 0m16.517s 00:14:35.768 user 0m24.711s 00:14:35.768 sys 0m7.068s 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.768 ************************************ 00:14:35.768 END TEST nvmf_nvme_cli 00:14:35.768 ************************************ 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.768 ************************************ 00:14:35.768 
START TEST nvmf_vfio_user 00:14:35.768 ************************************ 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:35.768 * Looking for test storage... 00:14:35.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.768 11:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:35.768 11:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.768 --rc genhtml_branch_coverage=1 00:14:35.768 --rc genhtml_function_coverage=1 00:14:35.768 --rc genhtml_legend=1 00:14:35.768 --rc geninfo_all_blocks=1 00:14:35.768 --rc geninfo_unexecuted_blocks=1 00:14:35.768 00:14:35.768 ' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.768 --rc genhtml_branch_coverage=1 00:14:35.768 --rc genhtml_function_coverage=1 00:14:35.768 --rc genhtml_legend=1 00:14:35.768 --rc geninfo_all_blocks=1 00:14:35.768 --rc geninfo_unexecuted_blocks=1 00:14:35.768 00:14:35.768 ' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.768 --rc genhtml_branch_coverage=1 00:14:35.768 --rc genhtml_function_coverage=1 00:14:35.768 --rc genhtml_legend=1 00:14:35.768 --rc geninfo_all_blocks=1 00:14:35.768 --rc geninfo_unexecuted_blocks=1 00:14:35.768 00:14:35.768 ' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.768 --rc genhtml_branch_coverage=1 00:14:35.768 --rc genhtml_function_coverage=1 00:14:35.768 --rc genhtml_legend=1 00:14:35.768 --rc geninfo_all_blocks=1 00:14:35.768 --rc geninfo_unexecuted_blocks=1 00:14:35.768 00:14:35.768 ' 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:35.768 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.769 
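Annotation: the `nvmf/common.sh` variables sourced above (`NVME_HOSTNQN`, `NVME_HOSTID`, `NVME_HOST`, `NVME_CONNECT`, `NVMF_PORT`) compose into an `nvme connect` command line roughly as sketched below. The host NQN/ID values are the ones generated in this run; the target address, subsystem NQN, and the `build_connect_cmd` helper are illustrative stand-ins, not part of the script.

```shell
# Illustrative composition of an `nvme connect` invocation from the
# traced nvmf/common.sh variables (assumption: build_connect_cmd and
# the sample address/subnqn are hypothetical; the flags are real
# nvme-cli options).
NVMF_PORT=4420
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'

build_connect_cmd() {
    local addr=$1 subnqn=$2
    # emit the full command line instead of executing it
    echo "$NVME_CONNECT" "${NVME_HOST[@]}" -t tcp -a "$addr" -s "$NVMF_PORT" -n "$subnqn"
}
```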
11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:35.769 11:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3372759 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3372759' 00:14:35.769 Process pid: 3372759 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3372759 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 3372759 ']' 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.769 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:35.769 [2024-12-06 11:13:41.733185] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:14:35.769 [2024-12-06 11:13:41.733261] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.769 [2024-12-06 11:13:41.816123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.769 [2024-12-06 11:13:41.857543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.769 [2024-12-06 11:13:41.857581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.769 [2024-12-06 11:13:41.857589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.769 [2024-12-06 11:13:41.857596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.769 [2024-12-06 11:13:41.857602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
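Annotation: the `waitforlisten 3372759` step above blocks until the freshly launched `nvmf_tgt` is up and answering on `/var/tmp/spdk.sock`. A condensed sketch of that wait loop, reconstructed from the trace, looks like this; the real helper also probes the socket through `rpc.py`, which is omitted here.

```shell
# Rough sketch of a waitforlisten-style poll (assumption: condensed
# from common/autotest_common.sh as traced; the real helper verifies
# the RPC socket with rpc.py rather than a bare -S test).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local i
    for ((i = 0; i < 50; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_addr ]] && return 0           # RPC socket is up
        sleep 0.2
    done
    return 1
}
```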
00:14:35.769 [2024-12-06 11:13:41.859459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.769 [2024-12-06 11:13:41.859576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.769 [2024-12-06 11:13:41.859733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.769 [2024-12-06 11:13:41.859734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.714 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.714 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:36.714 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:37.657 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:37.657 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:37.657 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:37.657 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.657 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:37.657 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:37.918 Malloc1 00:14:37.918 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:38.180 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:38.180 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:38.442 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:38.442 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:38.442 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:38.704 Malloc2 00:14:38.704 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:38.965 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:38.965 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:39.227 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:39.227 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:39.227 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:39.227 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:39.227 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:39.227 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:39.227 [2024-12-06 11:13:45.270085] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:14:39.227 [2024-12-06 11:13:45.270128] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373450 ] 00:14:39.227 [2024-12-06 11:13:45.322408] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:39.227 [2024-12-06 11:13:45.331210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:39.227 [2024-12-06 11:13:45.331234] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc16ba4c000 00:14:39.227 [2024-12-06 11:13:45.332214] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.333211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.334214] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.335226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.336221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.337237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.338244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.339240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.227 [2024-12-06 11:13:45.340253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:39.227 [2024-12-06 11:13:45.340264] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc16ba41000 00:14:39.227 [2024-12-06 11:13:45.341593] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.227 [2024-12-06 11:13:45.362503] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:39.227 [2024-12-06 11:13:45.362528] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:39.227 [2024-12-06 11:13:45.365477] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:39.227 [2024-12-06 11:13:45.365568] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:39.227 [2024-12-06 11:13:45.365659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:39.227 [2024-12-06 11:13:45.365675] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:39.228 [2024-12-06 11:13:45.365681] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:39.228 [2024-12-06 11:13:45.366387] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:39.228 [2024-12-06 11:13:45.366398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:39.228 [2024-12-06 11:13:45.366406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:39.228 [2024-12-06 11:13:45.367398] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:39.228 [2024-12-06 11:13:45.367408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:39.228 [2024-12-06 11:13:45.367416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.228 [2024-12-06 11:13:45.368398] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:39.228 [2024-12-06 11:13:45.368408] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.228 [2024-12-06 11:13:45.369402] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:39.228 [2024-12-06 11:13:45.369411] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:39.228 [2024-12-06 11:13:45.369419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:39.228 [2024-12-06 11:13:45.369430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.228 [2024-12-06 11:13:45.369541] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:39.228 [2024-12-06 11:13:45.369546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.228 [2024-12-06 11:13:45.369551] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:39.228 [2024-12-06 11:13:45.370404] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:39.228 [2024-12-06 11:13:45.371411] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:39.228 [2024-12-06 11:13:45.372415] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:39.228 [2024-12-06 11:13:45.373415] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.228 [2024-12-06 11:13:45.373471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.228 [2024-12-06 11:13:45.374428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:39.228 [2024-12-06 11:13:45.374437] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.228 [2024-12-06 11:13:45.374443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:39.228 [2024-12-06 11:13:45.374473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374494] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.228 [2024-12-06 11:13:45.374500] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.228 [2024-12-06 11:13:45.374504] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.228 [2024-12-06 11:13:45.374517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.228 [2024-12-06 11:13:45.374554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:39.228 [2024-12-06 11:13:45.374564] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:39.228 [2024-12-06 11:13:45.374571] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:39.228 [2024-12-06 11:13:45.374576] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:39.228 [2024-12-06 11:13:45.374581] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:39.228 [2024-12-06 11:13:45.374586] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:39.228 [2024-12-06 11:13:45.374590] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:39.228 [2024-12-06 11:13:45.374595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:39.228 [2024-12-06 11:13:45.374623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:39.228 [2024-12-06 11:13:45.374634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.228 [2024-12-06 
11:13:45.374643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.228 [2024-12-06 11:13:45.374652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.228 [2024-12-06 11:13:45.374660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.228 [2024-12-06 11:13:45.374665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:39.228 [2024-12-06 11:13:45.374695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:39.228 [2024-12-06 11:13:45.374701] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:39.228 [2024-12-06 11:13:45.374707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.228 [2024-12-06 11:13:45.374736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:39.228 [2024-12-06 11:13:45.374798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:39.228 [2024-12-06 11:13:45.374819] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:39.228 [2024-12-06 11:13:45.374822] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.228 [2024-12-06 11:13:45.374828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:39.228 [2024-12-06 11:13:45.374842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:39.228 [2024-12-06 11:13:45.374852] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:39.228 [2024-12-06 11:13:45.374869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.228 [2024-12-06 11:13:45.374889] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.228 [2024-12-06 11:13:45.374892] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.228 [2024-12-06 11:13:45.374898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.228 [2024-12-06 11:13:45.374919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:39.228 [2024-12-06 11:13:45.374932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374947] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.228 [2024-12-06 11:13:45.374952] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.228 [2024-12-06 11:13:45.374955] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.228 [2024-12-06 11:13:45.374961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.228 [2024-12-06 11:13:45.374971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:39.228 [2024-12-06 11:13:45.374979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:39.228 [2024-12-06 11:13:45.374993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:39.229 [2024-12-06 11:13:45.375001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:39.229 [2024-12-06 11:13:45.375007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:39.229 [2024-12-06 11:13:45.375012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:39.229 [2024-12-06 11:13:45.375018] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:39.229 [2024-12-06 11:13:45.375023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:39.229 [2024-12-06 11:13:45.375028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:39.229 [2024-12-06 11:13:45.375048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:39.229 [2024-12-06 11:13:45.375058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:39.229 [2024-12-06 11:13:45.375075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:39.229 [2024-12-06 11:13:45.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:39.229 [2024-12-06 11:13:45.375093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:39.229 [2024-12-06 11:13:45.375101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:39.229 [2024-12-06 11:13:45.375113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.229 [2024-12-06 11:13:45.375124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:39.229 [2024-12-06 11:13:45.375138] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:39.229 [2024-12-06 11:13:45.375142] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:39.229 [2024-12-06 11:13:45.375146] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:39.229 [2024-12-06 11:13:45.375150] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:39.229 [2024-12-06 11:13:45.375153] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:39.229 [2024-12-06 11:13:45.375159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:39.229 [2024-12-06 11:13:45.375167] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:39.229 [2024-12-06 11:13:45.375172] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:39.229 [2024-12-06 11:13:45.375175] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.229 [2024-12-06 11:13:45.375181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:39.229 [2024-12-06 11:13:45.375188] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:39.229 [2024-12-06 11:13:45.375193] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.229 [2024-12-06 11:13:45.375196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.229 [2024-12-06 11:13:45.375202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.229 [2024-12-06 11:13:45.375210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:39.229 [2024-12-06 11:13:45.375214] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:39.229 [2024-12-06 11:13:45.375218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.229 [2024-12-06 11:13:45.375223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:39.229 [2024-12-06 11:13:45.375230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:39.229 [2024-12-06 11:13:45.375242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:39.229 [2024-12-06 11:13:45.375253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:39.229 [2024-12-06 11:13:45.375260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:39.229 ===================================================== 00:14:39.229 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.229 ===================================================== 00:14:39.229 Controller Capabilities/Features 00:14:39.229 ================================ 00:14:39.229 Vendor ID: 4e58 00:14:39.229 Subsystem Vendor ID: 4e58 00:14:39.229 Serial Number: SPDK1 00:14:39.229 Model Number: SPDK bdev Controller 00:14:39.229 Firmware Version: 25.01 00:14:39.229 Recommended Arb Burst: 6 00:14:39.229 IEEE OUI Identifier: 8d 6b 50 00:14:39.229 Multi-path I/O 00:14:39.229 May have multiple subsystem ports: Yes 00:14:39.229 May have multiple controllers: Yes 00:14:39.229 Associated with SR-IOV VF: No 00:14:39.229 Max Data Transfer Size: 131072 00:14:39.229 Max Number of Namespaces: 32 00:14:39.229 Max Number of I/O Queues: 127 00:14:39.229 NVMe Specification Version (VS): 1.3 00:14:39.229 NVMe Specification Version (Identify): 1.3 00:14:39.229 Maximum Queue Entries: 256 00:14:39.229 Contiguous Queues Required: Yes 00:14:39.229 Arbitration Mechanisms Supported 00:14:39.229 Weighted Round Robin: Not Supported 00:14:39.229 Vendor Specific: Not Supported 00:14:39.229 Reset Timeout: 15000 ms 00:14:39.229 Doorbell Stride: 4 bytes 00:14:39.229 NVM Subsystem Reset: Not Supported 00:14:39.229 Command Sets Supported 00:14:39.229 NVM Command Set: Supported 00:14:39.229 Boot Partition: Not Supported 00:14:39.229 Memory 
Page Size Minimum: 4096 bytes 00:14:39.229 Memory Page Size Maximum: 4096 bytes 00:14:39.229 Persistent Memory Region: Not Supported 00:14:39.229 Optional Asynchronous Events Supported 00:14:39.229 Namespace Attribute Notices: Supported 00:14:39.229 Firmware Activation Notices: Not Supported 00:14:39.229 ANA Change Notices: Not Supported 00:14:39.229 PLE Aggregate Log Change Notices: Not Supported 00:14:39.229 LBA Status Info Alert Notices: Not Supported 00:14:39.229 EGE Aggregate Log Change Notices: Not Supported 00:14:39.229 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.229 Zone Descriptor Change Notices: Not Supported 00:14:39.229 Discovery Log Change Notices: Not Supported 00:14:39.229 Controller Attributes 00:14:39.229 128-bit Host Identifier: Supported 00:14:39.229 Non-Operational Permissive Mode: Not Supported 00:14:39.229 NVM Sets: Not Supported 00:14:39.229 Read Recovery Levels: Not Supported 00:14:39.229 Endurance Groups: Not Supported 00:14:39.229 Predictable Latency Mode: Not Supported 00:14:39.229 Traffic Based Keep ALive: Not Supported 00:14:39.229 Namespace Granularity: Not Supported 00:14:39.229 SQ Associations: Not Supported 00:14:39.229 UUID List: Not Supported 00:14:39.229 Multi-Domain Subsystem: Not Supported 00:14:39.229 Fixed Capacity Management: Not Supported 00:14:39.229 Variable Capacity Management: Not Supported 00:14:39.229 Delete Endurance Group: Not Supported 00:14:39.229 Delete NVM Set: Not Supported 00:14:39.229 Extended LBA Formats Supported: Not Supported 00:14:39.229 Flexible Data Placement Supported: Not Supported 00:14:39.229 00:14:39.229 Controller Memory Buffer Support 00:14:39.229 ================================ 00:14:39.229 Supported: No 00:14:39.229 00:14:39.229 Persistent Memory Region Support 00:14:39.229 ================================ 00:14:39.229 Supported: No 00:14:39.229 00:14:39.229 Admin Command Set Attributes 00:14:39.229 ============================ 00:14:39.229 Security Send/Receive: Not Supported 
00:14:39.229 Format NVM: Not Supported 00:14:39.229 Firmware Activate/Download: Not Supported 00:14:39.229 Namespace Management: Not Supported 00:14:39.229 Device Self-Test: Not Supported 00:14:39.229 Directives: Not Supported 00:14:39.229 NVMe-MI: Not Supported 00:14:39.229 Virtualization Management: Not Supported 00:14:39.229 Doorbell Buffer Config: Not Supported 00:14:39.229 Get LBA Status Capability: Not Supported 00:14:39.229 Command & Feature Lockdown Capability: Not Supported 00:14:39.229 Abort Command Limit: 4 00:14:39.229 Async Event Request Limit: 4 00:14:39.229 Number of Firmware Slots: N/A 00:14:39.229 Firmware Slot 1 Read-Only: N/A 00:14:39.229 Firmware Activation Without Reset: N/A 00:14:39.229 Multiple Update Detection Support: N/A 00:14:39.229 Firmware Update Granularity: No Information Provided 00:14:39.229 Per-Namespace SMART Log: No 00:14:39.229 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.229 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:39.229 Command Effects Log Page: Supported 00:14:39.229 Get Log Page Extended Data: Supported 00:14:39.229 Telemetry Log Pages: Not Supported 00:14:39.229 Persistent Event Log Pages: Not Supported 00:14:39.229 Supported Log Pages Log Page: May Support 00:14:39.229 Commands Supported & Effects Log Page: Not Supported 00:14:39.229 Feature Identifiers & Effects Log Page:May Support 00:14:39.229 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.229 Data Area 4 for Telemetry Log: Not Supported 00:14:39.229 Error Log Page Entries Supported: 128 00:14:39.229 Keep Alive: Supported 00:14:39.229 Keep Alive Granularity: 10000 ms 00:14:39.229 00:14:39.229 NVM Command Set Attributes 00:14:39.230 ========================== 00:14:39.230 Submission Queue Entry Size 00:14:39.230 Max: 64 00:14:39.230 Min: 64 00:14:39.230 Completion Queue Entry Size 00:14:39.230 Max: 16 00:14:39.230 Min: 16 00:14:39.230 Number of Namespaces: 32 00:14:39.230 Compare Command: Supported 00:14:39.230 Write Uncorrectable 
Command: Not Supported 00:14:39.230 Dataset Management Command: Supported 00:14:39.230 Write Zeroes Command: Supported 00:14:39.230 Set Features Save Field: Not Supported 00:14:39.230 Reservations: Not Supported 00:14:39.230 Timestamp: Not Supported 00:14:39.230 Copy: Supported 00:14:39.230 Volatile Write Cache: Present 00:14:39.230 Atomic Write Unit (Normal): 1 00:14:39.230 Atomic Write Unit (PFail): 1 00:14:39.230 Atomic Compare & Write Unit: 1 00:14:39.230 Fused Compare & Write: Supported 00:14:39.230 Scatter-Gather List 00:14:39.230 SGL Command Set: Supported (Dword aligned) 00:14:39.230 SGL Keyed: Not Supported 00:14:39.230 SGL Bit Bucket Descriptor: Not Supported 00:14:39.230 SGL Metadata Pointer: Not Supported 00:14:39.230 Oversized SGL: Not Supported 00:14:39.230 SGL Metadata Address: Not Supported 00:14:39.230 SGL Offset: Not Supported 00:14:39.230 Transport SGL Data Block: Not Supported 00:14:39.230 Replay Protected Memory Block: Not Supported 00:14:39.230 00:14:39.230 Firmware Slot Information 00:14:39.230 ========================= 00:14:39.230 Active slot: 1 00:14:39.230 Slot 1 Firmware Revision: 25.01 00:14:39.230 00:14:39.230 00:14:39.230 Commands Supported and Effects 00:14:39.230 ============================== 00:14:39.230 Admin Commands 00:14:39.230 -------------- 00:14:39.230 Get Log Page (02h): Supported 00:14:39.230 Identify (06h): Supported 00:14:39.230 Abort (08h): Supported 00:14:39.230 Set Features (09h): Supported 00:14:39.230 Get Features (0Ah): Supported 00:14:39.230 Asynchronous Event Request (0Ch): Supported 00:14:39.230 Keep Alive (18h): Supported 00:14:39.230 I/O Commands 00:14:39.230 ------------ 00:14:39.230 Flush (00h): Supported LBA-Change 00:14:39.230 Write (01h): Supported LBA-Change 00:14:39.230 Read (02h): Supported 00:14:39.230 Compare (05h): Supported 00:14:39.230 Write Zeroes (08h): Supported LBA-Change 00:14:39.230 Dataset Management (09h): Supported LBA-Change 00:14:39.230 Copy (19h): Supported LBA-Change 00:14:39.230 
00:14:39.230 Error Log 00:14:39.230 ========= 00:14:39.230 00:14:39.230 Arbitration 00:14:39.230 =========== 00:14:39.230 Arbitration Burst: 1 00:14:39.230 00:14:39.230 Power Management 00:14:39.230 ================ 00:14:39.230 Number of Power States: 1 00:14:39.230 Current Power State: Power State #0 00:14:39.230 Power State #0: 00:14:39.230 Max Power: 0.00 W 00:14:39.230 Non-Operational State: Operational 00:14:39.230 Entry Latency: Not Reported 00:14:39.230 Exit Latency: Not Reported 00:14:39.230 Relative Read Throughput: 0 00:14:39.230 Relative Read Latency: 0 00:14:39.230 Relative Write Throughput: 0 00:14:39.230 Relative Write Latency: 0 00:14:39.230 Idle Power: Not Reported 00:14:39.230 Active Power: Not Reported 00:14:39.230 Non-Operational Permissive Mode: Not Supported 00:14:39.230 00:14:39.230 Health Information 00:14:39.230 ================== 00:14:39.230 Critical Warnings: 00:14:39.230 Available Spare Space: OK 00:14:39.230 Temperature: OK 00:14:39.230 Device Reliability: OK 00:14:39.230 Read Only: No 00:14:39.230 Volatile Memory Backup: OK 00:14:39.230 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:39.230 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:39.230 Available Spare: 0% 00:14:39.230 Available Sp[2024-12-06 11:13:45.375362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:39.230 [2024-12-06 11:13:45.375371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:39.230 [2024-12-06 11:13:45.375402] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:39.230 [2024-12-06 11:13:45.375412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.230 [2024-12-06 11:13:45.375418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.230 [2024-12-06 11:13:45.375425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.230 [2024-12-06 11:13:45.375431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.230 [2024-12-06 11:13:45.376445] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:39.230 [2024-12-06 11:13:45.376458] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:39.230 [2024-12-06 11:13:45.377441] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.230 [2024-12-06 11:13:45.377483] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:39.230 [2024-12-06 11:13:45.377490] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:39.230 [2024-12-06 11:13:45.378449] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:39.230 [2024-12-06 11:13:45.378461] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:39.230 [2024-12-06 11:13:45.378526] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:39.230 [2024-12-06 11:13:45.382871] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.491 are Threshold: 0% 00:14:39.491 Life Percentage Used: 0% 
00:14:39.491 Data Units Read: 0 00:14:39.491 Data Units Written: 0 00:14:39.491 Host Read Commands: 0 00:14:39.491 Host Write Commands: 0 00:14:39.491 Controller Busy Time: 0 minutes 00:14:39.491 Power Cycles: 0 00:14:39.491 Power On Hours: 0 hours 00:14:39.491 Unsafe Shutdowns: 0 00:14:39.491 Unrecoverable Media Errors: 0 00:14:39.491 Lifetime Error Log Entries: 0 00:14:39.491 Warning Temperature Time: 0 minutes 00:14:39.491 Critical Temperature Time: 0 minutes 00:14:39.491 00:14:39.491 Number of Queues 00:14:39.491 ================ 00:14:39.491 Number of I/O Submission Queues: 127 00:14:39.491 Number of I/O Completion Queues: 127 00:14:39.491 00:14:39.491 Active Namespaces 00:14:39.491 ================= 00:14:39.491 Namespace ID:1 00:14:39.491 Error Recovery Timeout: Unlimited 00:14:39.491 Command Set Identifier: NVM (00h) 00:14:39.491 Deallocate: Supported 00:14:39.491 Deallocated/Unwritten Error: Not Supported 00:14:39.491 Deallocated Read Value: Unknown 00:14:39.491 Deallocate in Write Zeroes: Not Supported 00:14:39.491 Deallocated Guard Field: 0xFFFF 00:14:39.491 Flush: Supported 00:14:39.491 Reservation: Supported 00:14:39.491 Namespace Sharing Capabilities: Multiple Controllers 00:14:39.491 Size (in LBAs): 131072 (0GiB) 00:14:39.491 Capacity (in LBAs): 131072 (0GiB) 00:14:39.491 Utilization (in LBAs): 131072 (0GiB) 00:14:39.491 NGUID: 03227D817B7D442F8E46B6097F0AFF1D 00:14:39.491 UUID: 03227d81-7b7d-442f-8e46-b6097f0aff1d 00:14:39.491 Thin Provisioning: Not Supported 00:14:39.491 Per-NS Atomic Units: Yes 00:14:39.491 Atomic Boundary Size (Normal): 0 00:14:39.491 Atomic Boundary Size (PFail): 0 00:14:39.491 Atomic Boundary Offset: 0 00:14:39.491 Maximum Single Source Range Length: 65535 00:14:39.491 Maximum Copy Length: 65535 00:14:39.491 Maximum Source Range Count: 1 00:14:39.491 NGUID/EUI64 Never Reused: No 00:14:39.491 Namespace Write Protected: No 00:14:39.491 Number of LBA Formats: 1 00:14:39.491 Current LBA Format: LBA Format #00 00:14:39.491 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:39.491 00:14:39.491 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:39.491 [2024-12-06 11:13:45.578553] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.772 Initializing NVMe Controllers 00:14:44.772 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.772 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:44.772 Initialization complete. Launching workers. 00:14:44.772 ======================================================== 00:14:44.772 Latency(us) 00:14:44.772 Device Information : IOPS MiB/s Average min max 00:14:44.772 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40010.05 156.29 3199.41 867.42 6877.85 00:14:44.772 ======================================================== 00:14:44.772 Total : 40010.05 156.29 3199.41 867.42 6877.85 00:14:44.772 00:14:44.772 [2024-12-06 11:13:50.599311] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.772 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:44.772 [2024-12-06 11:13:50.789186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.060 Initializing NVMe Controllers 00:14:50.060 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.060 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:50.060 Initialization complete. Launching workers. 00:14:50.060 ======================================================== 00:14:50.060 Latency(us) 00:14:50.060 Device Information : IOPS MiB/s Average min max 00:14:50.060 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15968.00 62.38 8022.34 3164.94 15962.45 00:14:50.060 ======================================================== 00:14:50.060 Total : 15968.00 62.38 8022.34 3164.94 15962.45 00:14:50.060 00:14:50.060 [2024-12-06 11:13:55.825376] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.060 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:50.060 [2024-12-06 11:13:56.046343] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.353 [2024-12-06 11:14:01.126094] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.353 Initializing NVMe Controllers 00:14:55.353 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.353 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:55.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:55.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:55.353 Initialization complete. 
Launching workers. 00:14:55.353 Starting thread on core 2 00:14:55.353 Starting thread on core 3 00:14:55.353 Starting thread on core 1 00:14:55.353 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:55.353 [2024-12-06 11:14:01.411933] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.654 [2024-12-06 11:14:04.460606] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:58.654 Initializing NVMe Controllers 00:14:58.654 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.654 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.654 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:58.654 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:58.654 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:58.654 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:58.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:58.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:58.654 Initialization complete. Launching workers. 
00:14:58.654 Starting thread on core 1 with urgent priority queue 00:14:58.654 Starting thread on core 2 with urgent priority queue 00:14:58.654 Starting thread on core 3 with urgent priority queue 00:14:58.654 Starting thread on core 0 with urgent priority queue 00:14:58.654 SPDK bdev Controller (SPDK1 ) core 0: 8572.00 IO/s 11.67 secs/100000 ios 00:14:58.654 SPDK bdev Controller (SPDK1 ) core 1: 8575.67 IO/s 11.66 secs/100000 ios 00:14:58.654 SPDK bdev Controller (SPDK1 ) core 2: 7598.00 IO/s 13.16 secs/100000 ios 00:14:58.654 SPDK bdev Controller (SPDK1 ) core 3: 9850.00 IO/s 10.15 secs/100000 ios 00:14:58.654 ======================================================== 00:14:58.654 00:14:58.654 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:58.654 [2024-12-06 11:14:04.760278] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.654 Initializing NVMe Controllers 00:14:58.654 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.654 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.654 Namespace ID: 1 size: 0GB 00:14:58.654 Initialization complete. 00:14:58.654 INFO: using host memory buffer for IO 00:14:58.654 Hello world! 
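The MiB/s column in the two spdk_nvme_perf summary tables earlier in this log (the -w read and -w write runs against the vfio-user controller) is derived directly from the reported IOPS and the fixed 4 KiB I/O size selected with -o 4096. A minimal sketch of that conversion, for checking the tables by hand (the helper name is illustrative, not part of SPDK):

```python
# Sanity-check the MiB/s column of an spdk_nvme_perf summary:
# throughput in MiB/s is IOPS multiplied by the I/O size in bytes.
IO_SIZE = 4096  # bytes, from the -o 4096 option used in both perf runs


def iops_to_mibs(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / (1024 * 1024)


# Figures taken from the read and write result tables above.
read_mibs = iops_to_mibs(40010.05)   # table reports 156.29 MiB/s
write_mibs = iops_to_mibs(15968.00)  # table reports 62.38 MiB/s
print(f"read: {read_mibs:.2f} MiB/s, write: {write_mibs:.2f} MiB/s")
```

Both values reproduce the MiB/s column to two decimal places, confirming the tables use MiB (2^20 bytes), not MB.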
00:14:58.654 [2024-12-06 11:14:04.795506] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:58.914 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:59.173 [2024-12-06 11:14:05.093302] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.113 Initializing NVMe Controllers 00:15:00.113 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.113 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.113 Initialization complete. Launching workers. 00:15:00.113 submit (in ns) avg, min, max = 8359.6, 3907.5, 3999811.7 00:15:00.113 complete (in ns) avg, min, max = 18822.3, 2382.5, 4994281.7 00:15:00.113 00:15:00.113 Submit histogram 00:15:00.113 ================ 00:15:00.113 Range in us Cumulative Count 00:15:00.113 3.893 - 3.920: 0.3603% ( 67) 00:15:00.113 3.920 - 3.947: 3.1993% ( 528) 00:15:00.113 3.947 - 3.973: 10.4742% ( 1353) 00:15:00.113 3.973 - 4.000: 21.9271% ( 2130) 00:15:00.113 4.000 - 4.027: 34.2617% ( 2294) 00:15:00.113 4.027 - 4.053: 46.1286% ( 2207) 00:15:00.113 4.053 - 4.080: 62.8670% ( 3113) 00:15:00.113 4.080 - 4.107: 79.0461% ( 3009) 00:15:00.113 4.107 - 4.133: 89.7623% ( 1993) 00:15:00.113 4.133 - 4.160: 95.6447% ( 1094) 00:15:00.113 4.160 - 4.187: 98.0804% ( 453) 00:15:00.113 4.187 - 4.213: 99.0859% ( 187) 00:15:00.113 4.213 - 4.240: 99.3601% ( 51) 00:15:00.113 4.240 - 4.267: 99.4408% ( 15) 00:15:00.113 4.267 - 4.293: 99.4516% ( 2) 00:15:00.113 4.293 - 4.320: 99.4677% ( 3) 00:15:00.113 4.373 - 4.400: 99.4731% ( 1) 00:15:00.113 4.453 - 4.480: 99.4784% ( 1) 00:15:00.113 4.533 - 4.560: 99.4838% ( 1) 00:15:00.113 4.560 - 4.587: 99.4892% ( 1) 00:15:00.113 4.720 - 4.747: 99.4946% ( 1) 
00:15:00.113 4.880 - 4.907: 99.4999% ( 1) 00:15:00.113 5.013 - 5.040: 99.5053% ( 1) 00:15:00.113 5.253 - 5.280: 99.5107% ( 1) 00:15:00.113 5.520 - 5.547: 99.5161% ( 1) 00:15:00.113 5.760 - 5.787: 99.5215% ( 1) 00:15:00.113 5.840 - 5.867: 99.5268% ( 1) 00:15:00.113 5.867 - 5.893: 99.5322% ( 1) 00:15:00.113 5.893 - 5.920: 99.5430% ( 2) 00:15:00.113 5.920 - 5.947: 99.5591% ( 3) 00:15:00.113 5.947 - 5.973: 99.5752% ( 3) 00:15:00.113 5.973 - 6.000: 99.5806% ( 1) 00:15:00.113 6.000 - 6.027: 99.5914% ( 2) 00:15:00.113 6.027 - 6.053: 99.5967% ( 1) 00:15:00.113 6.053 - 6.080: 99.6075% ( 2) 00:15:00.113 6.080 - 6.107: 99.6236% ( 3) 00:15:00.113 6.107 - 6.133: 99.6290% ( 1) 00:15:00.113 6.133 - 6.160: 99.6505% ( 4) 00:15:00.113 6.160 - 6.187: 99.6613% ( 2) 00:15:00.113 6.187 - 6.213: 99.6774% ( 3) 00:15:00.113 6.213 - 6.240: 99.6828% ( 1) 00:15:00.113 6.240 - 6.267: 99.6989% ( 3) 00:15:00.113 6.267 - 6.293: 99.7043% ( 1) 00:15:00.113 6.293 - 6.320: 99.7096% ( 1) 00:15:00.113 6.320 - 6.347: 99.7258% ( 3) 00:15:00.113 6.347 - 6.373: 99.7365% ( 2) 00:15:00.113 6.400 - 6.427: 99.7473% ( 2) 00:15:00.113 6.427 - 6.453: 99.7634% ( 3) 00:15:00.113 6.453 - 6.480: 99.7688% ( 1) 00:15:00.113 6.480 - 6.507: 99.7795% ( 2) 00:15:00.113 6.507 - 6.533: 99.7903% ( 2) 00:15:00.113 6.560 - 6.587: 99.8011% ( 2) 00:15:00.113 6.587 - 6.613: 99.8064% ( 1) 00:15:00.113 6.667 - 6.693: 99.8118% ( 1) 00:15:00.113 6.693 - 6.720: 99.8172% ( 1) 00:15:00.113 6.747 - 6.773: 99.8226% ( 1) 00:15:00.113 6.773 - 6.800: 99.8279% ( 1) 00:15:00.113 6.880 - 6.933: 99.8333% ( 1) 00:15:00.113 6.933 - 6.987: 99.8387% ( 1) 00:15:00.113 7.040 - 7.093: 99.8441% ( 1) 00:15:00.113 7.200 - 7.253: 99.8602% ( 3) 00:15:00.113 7.307 - 7.360: 99.8710% ( 2) 00:15:00.113 7.733 - 7.787: 99.8763% ( 1) 00:15:00.113 7.787 - 7.840: 99.8817% ( 1) 00:15:00.113 8.160 - 8.213: 99.8871% ( 1) 00:15:00.113 13.547 - 13.600: 99.8925% ( 1) 00:15:00.113 3986.773 - 4014.080: 100.0000% ( 20) 00:15:00.113 00:15:00.113 Complete histogram 00:15:00.113 
================== 00:15:00.113 Range in us Cumulative Count 00:15:00.113 2.373 - 2.387: 0.0054% ( 1) 00:15:00.113 2.387 - 2.400: 0.0161% ( 2) 00:15:00.113 2.400 - 2.413: 0.7743% ( 141) 00:15:00.113 2.413 - 2.427: 0.9302% ( 29) 00:15:00.113 2.427 - 2.440: 1.1614% ( 43) 00:15:00.113 2.440 - 2.453: 1.2421% ( 15) 00:15:00.113 2.453 - 2.467: 38.5041% ( 6930) 00:15:00.113 2.467 - 2.480: 52.4357% ( 2591) 00:15:00.113 2.480 - 2.493: 69.1311% ( 3105) 00:15:00.113 2.493 - 2.507: 76.7824% ( 1423) 00:15:00.113 2.507 - 2.520: 80.9980% ( 784) [2024-12-06 11:14:06.116849] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.113 2.520 - 2.533: 83.9391% ( 547) 00:15:00.113 2.533 - 2.547: 88.9128% ( 925) 00:15:00.113 2.547 - 2.560: 92.8218% ( 727) 00:15:00.113 2.560 - 2.573: 96.3652% ( 659) 00:15:00.113 2.573 - 2.587: 98.3116% ( 362) 00:15:00.113 2.587 - 2.600: 99.1021% ( 147) 00:15:00.113 2.600 - 2.613: 99.2741% ( 32) 00:15:00.113 2.613 - 2.627: 99.2956% ( 4) 00:15:00.113 2.640 - 2.653: 99.3010% ( 1) 00:15:00.113 2.653 - 2.667: 99.3118% ( 2) 00:15:00.114 2.720 - 2.733: 99.3171% ( 1) 00:15:00.114 4.240 - 4.267: 99.3279% ( 2) 00:15:00.114 4.267 - 4.293: 99.3333% ( 1) 00:15:00.114 4.293 - 4.320: 99.3386% ( 1) 00:15:00.114 4.320 - 4.347: 99.3494% ( 2) 00:15:00.114 4.400 - 4.427: 99.3601% ( 2) 00:15:00.114 4.427 - 4.453: 99.3817% ( 4) 00:15:00.114 4.480 - 4.507: 99.3870% ( 1) 00:15:00.114 4.507 - 4.533: 99.3924% ( 1) 00:15:00.114 4.587 - 4.613: 99.3978% ( 1) 00:15:00.114 4.613 - 4.640: 99.4085% ( 2) 00:15:00.114 4.667 - 4.693: 99.4139% ( 1) 00:15:00.114 4.693 - 4.720: 99.4247% ( 2) 00:15:00.114 4.747 - 4.773: 99.4300% ( 1) 00:15:00.114 4.773 - 4.800: 99.4354% ( 1) 00:15:00.114 4.800 - 4.827: 99.4462% ( 2) 00:15:00.114 4.853 - 4.880: 99.4516% ( 1) 00:15:00.114 4.880 - 4.907: 99.4569% ( 1) 00:15:00.114 4.907 - 4.933: 99.4677% ( 2) 00:15:00.114 4.933 - 4.960: 99.4731% ( 1) 00:15:00.114 5.067 - 5.093: 99.4784% ( 1)
00:15:00.114 5.093 - 5.120: 99.4946% ( 3) 00:15:00.114 5.120 - 5.147: 99.4999% ( 1) 00:15:00.114 5.173 - 5.200: 99.5107% ( 2) 00:15:00.114 5.227 - 5.253: 99.5161% ( 1) 00:15:00.114 5.307 - 5.333: 99.5215% ( 1) 00:15:00.114 5.333 - 5.360: 99.5268% ( 1) 00:15:00.114 5.360 - 5.387: 99.5322% ( 1) 00:15:00.114 5.440 - 5.467: 99.5376% ( 1) 00:15:00.114 5.893 - 5.920: 99.5430% ( 1) 00:15:00.114 5.920 - 5.947: 99.5483% ( 1) 00:15:00.114 6.133 - 6.160: 99.5537% ( 1) 00:15:00.114 6.373 - 6.400: 99.5591% ( 1) 00:15:00.114 6.933 - 6.987: 99.5645% ( 1) 00:15:00.114 7.147 - 7.200: 99.5698% ( 1) 00:15:00.114 10.347 - 10.400: 99.5752% ( 1) 00:15:00.114 10.453 - 10.507: 99.5806% ( 1) 00:15:00.114 10.667 - 10.720: 99.5860% ( 1) 00:15:00.114 44.373 - 44.587: 99.5914% ( 1) 00:15:00.114 3031.040 - 3044.693: 99.5967% ( 1) 00:15:00.114 3986.773 - 4014.080: 99.9946% ( 74) 00:15:00.114 4969.813 - 4997.120: 100.0000% ( 1) 00:15:00.114 00:15:00.114 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:00.114 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:00.114 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:00.114 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:00.114 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:00.374 [ 00:15:00.374 { 00:15:00.374 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:00.374 "subtype": "Discovery", 00:15:00.374 "listen_addresses": [], 00:15:00.374 "allow_any_host": true, 00:15:00.374 "hosts": [] 00:15:00.374 }, 00:15:00.374 { 00:15:00.374 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:15:00.374 "subtype": "NVMe", 00:15:00.374 "listen_addresses": [ 00:15:00.374 { 00:15:00.374 "trtype": "VFIOUSER", 00:15:00.374 "adrfam": "IPv4", 00:15:00.374 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:00.374 "trsvcid": "0" 00:15:00.374 } 00:15:00.374 ], 00:15:00.374 "allow_any_host": true, 00:15:00.374 "hosts": [], 00:15:00.374 "serial_number": "SPDK1", 00:15:00.374 "model_number": "SPDK bdev Controller", 00:15:00.374 "max_namespaces": 32, 00:15:00.374 "min_cntlid": 1, 00:15:00.374 "max_cntlid": 65519, 00:15:00.374 "namespaces": [ 00:15:00.374 { 00:15:00.374 "nsid": 1, 00:15:00.374 "bdev_name": "Malloc1", 00:15:00.374 "name": "Malloc1", 00:15:00.374 "nguid": "03227D817B7D442F8E46B6097F0AFF1D", 00:15:00.374 "uuid": "03227d81-7b7d-442f-8e46-b6097f0aff1d" 00:15:00.374 } 00:15:00.374 ] 00:15:00.374 }, 00:15:00.374 { 00:15:00.374 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:00.374 "subtype": "NVMe", 00:15:00.374 "listen_addresses": [ 00:15:00.374 { 00:15:00.374 "trtype": "VFIOUSER", 00:15:00.374 "adrfam": "IPv4", 00:15:00.374 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:00.374 "trsvcid": "0" 00:15:00.374 } 00:15:00.374 ], 00:15:00.374 "allow_any_host": true, 00:15:00.374 "hosts": [], 00:15:00.374 "serial_number": "SPDK2", 00:15:00.374 "model_number": "SPDK bdev Controller", 00:15:00.374 "max_namespaces": 32, 00:15:00.374 "min_cntlid": 1, 00:15:00.374 "max_cntlid": 65519, 00:15:00.374 "namespaces": [ 00:15:00.374 { 00:15:00.374 "nsid": 1, 00:15:00.374 "bdev_name": "Malloc2", 00:15:00.374 "name": "Malloc2", 00:15:00.374 "nguid": "6C08EB666553419B9387E97178CFA4F5", 00:15:00.374 "uuid": "6c08eb66-6553-419b-9387-e97178cfa4f5" 00:15:00.374 } 00:15:00.374 ] 00:15:00.374 } 00:15:00.374 ] 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # 
aerpid=3377615 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:00.374 Malloc3 00:15:00.374 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:00.634 [2024-12-06 11:14:06.553306] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.634 [2024-12-06 11:14:06.698313] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.634 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:15:00.634 Asynchronous Event Request test 00:15:00.634 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.634 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.634 Registering asynchronous event callbacks... 00:15:00.634 Starting namespace attribute notice tests for all controllers... 00:15:00.634 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:00.634 aer_cb - Changed Namespace 00:15:00.634 Cleaning up... 00:15:00.895 [ 00:15:00.895 { 00:15:00.895 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:00.895 "subtype": "Discovery", 00:15:00.895 "listen_addresses": [], 00:15:00.895 "allow_any_host": true, 00:15:00.895 "hosts": [] 00:15:00.895 }, 00:15:00.895 { 00:15:00.895 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:00.895 "subtype": "NVMe", 00:15:00.895 "listen_addresses": [ 00:15:00.895 { 00:15:00.895 "trtype": "VFIOUSER", 00:15:00.895 "adrfam": "IPv4", 00:15:00.895 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:00.895 "trsvcid": "0" 00:15:00.895 } 00:15:00.895 ], 00:15:00.895 "allow_any_host": true, 00:15:00.895 "hosts": [], 00:15:00.895 "serial_number": "SPDK1", 00:15:00.895 "model_number": "SPDK bdev Controller", 00:15:00.895 "max_namespaces": 32, 00:15:00.895 "min_cntlid": 1, 00:15:00.895 "max_cntlid": 65519, 00:15:00.895 "namespaces": [ 00:15:00.895 { 00:15:00.895 "nsid": 1, 00:15:00.895 "bdev_name": "Malloc1", 00:15:00.895 "name": "Malloc1", 00:15:00.895 "nguid": "03227D817B7D442F8E46B6097F0AFF1D", 00:15:00.895 "uuid": "03227d81-7b7d-442f-8e46-b6097f0aff1d" 00:15:00.895 }, 00:15:00.895 { 00:15:00.895 "nsid": 2, 00:15:00.895 "bdev_name": "Malloc3", 00:15:00.895 "name": "Malloc3", 00:15:00.895 "nguid": "5A72AAF3BED04254AC8F286EA6B1C0F3", 00:15:00.895 "uuid": "5a72aaf3-bed0-4254-ac8f-286ea6b1c0f3" 00:15:00.895 } 00:15:00.895 ] 00:15:00.895 }, 00:15:00.895 { 00:15:00.895 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:00.895 "subtype": "NVMe", 
00:15:00.895 "listen_addresses": [ 00:15:00.895 { 00:15:00.895 "trtype": "VFIOUSER", 00:15:00.895 "adrfam": "IPv4", 00:15:00.895 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:00.895 "trsvcid": "0" 00:15:00.895 } 00:15:00.895 ], 00:15:00.895 "allow_any_host": true, 00:15:00.895 "hosts": [], 00:15:00.895 "serial_number": "SPDK2", 00:15:00.895 "model_number": "SPDK bdev Controller", 00:15:00.895 "max_namespaces": 32, 00:15:00.895 "min_cntlid": 1, 00:15:00.895 "max_cntlid": 65519, 00:15:00.895 "namespaces": [ 00:15:00.895 { 00:15:00.895 "nsid": 1, 00:15:00.895 "bdev_name": "Malloc2", 00:15:00.895 "name": "Malloc2", 00:15:00.895 "nguid": "6C08EB666553419B9387E97178CFA4F5", 00:15:00.895 "uuid": "6c08eb66-6553-419b-9387-e97178cfa4f5" 00:15:00.895 } 00:15:00.895 ] 00:15:00.895 } 00:15:00.895 ] 00:15:00.895 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3377615 00:15:00.895 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.895 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:00.895 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:00.895 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:00.895 [2024-12-06 11:14:06.938953] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:15:00.895 [2024-12-06 11:14:06.939017] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377805 ] 00:15:00.895 [2024-12-06 11:14:06.994913] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:00.895 [2024-12-06 11:14:06.997137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.895 [2024-12-06 11:14:06.997161] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f755155b000 00:15:00.895 [2024-12-06 11:14:06.998134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.895 [2024-12-06 11:14:06.999145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.895 [2024-12-06 11:14:07.000155] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.895 [2024-12-06 11:14:07.001162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.895 [2024-12-06 11:14:07.002168] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.895 [2024-12-06 11:14:07.003174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.895 [2024-12-06 11:14:07.004177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.895 
[2024-12-06 11:14:07.005185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.895 [2024-12-06 11:14:07.006195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.895 [2024-12-06 11:14:07.006205] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7551550000 00:15:00.895 [2024-12-06 11:14:07.007531] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.895 [2024-12-06 11:14:07.026741] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:00.895 [2024-12-06 11:14:07.026770] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:00.895 [2024-12-06 11:14:07.028820] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:00.895 [2024-12-06 11:14:07.028869] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:00.896 [2024-12-06 11:14:07.028955] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:00.896 [2024-12-06 11:14:07.028968] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:00.896 [2024-12-06 11:14:07.028974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:00.896 [2024-12-06 11:14:07.029830] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:00.896 [2024-12-06 11:14:07.029840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:00.896 [2024-12-06 11:14:07.029848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:00.896 [2024-12-06 11:14:07.030836] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:00.896 [2024-12-06 11:14:07.030846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:00.896 [2024-12-06 11:14:07.030854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.896 [2024-12-06 11:14:07.031842] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:00.896 [2024-12-06 11:14:07.031852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.896 [2024-12-06 11:14:07.032848] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:00.896 [2024-12-06 11:14:07.032857] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:00.896 [2024-12-06 11:14:07.032867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:00.896 [2024-12-06 11:14:07.032874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.896 [2024-12-06 11:14:07.032982] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:00.896 [2024-12-06 11:14:07.032987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.896 [2024-12-06 11:14:07.032992] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:00.896 [2024-12-06 11:14:07.033859] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:00.896 [2024-12-06 11:14:07.034874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:00.896 [2024-12-06 11:14:07.035881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:00.896 [2024-12-06 11:14:07.036881] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:00.896 [2024-12-06 11:14:07.036924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.896 [2024-12-06 11:14:07.037892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:00.896 [2024-12-06 11:14:07.037903] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.896 [2024-12-06 11:14:07.037909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:00.896 [2024-12-06 11:14:07.037930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:00.896 [2024-12-06 11:14:07.037938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.896 [2024-12-06 11:14:07.037954] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.896 [2024-12-06 11:14:07.037959] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.896 [2024-12-06 11:14:07.037963] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:00.896 [2024-12-06 11:14:07.037975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.896 [2024-12-06 11:14:07.046870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:00.896 [2024-12-06 11:14:07.046883] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:00.896 [2024-12-06 11:14:07.046890] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:00.896 [2024-12-06 11:14:07.046895] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:00.896 [2024-12-06 11:14:07.046900] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:00.896 [2024-12-06 11:14:07.046905] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:00.896 [2024-12-06 11:14:07.046909] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:00.896 [2024-12-06 11:14:07.046914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:00.896 [2024-12-06 11:14:07.046923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.896 [2024-12-06 11:14:07.046933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:00.896 [2024-12-06 11:14:07.054873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:00.896 [2024-12-06 11:14:07.054887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.896 [2024-12-06 11:14:07.054896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.896 [2024-12-06 11:14:07.054904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.896 [2024-12-06 11:14:07.054913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.896 [2024-12-06 11:14:07.054918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.896 [2024-12-06 11:14:07.054927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.896 [2024-12-06 11:14:07.054939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.062868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.062877] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:01.157 [2024-12-06 11:14:07.062882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.062889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.062895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.062904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.070867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.070935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.070944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:01.157 
[2024-12-06 11:14:07.070951] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:01.157 [2024-12-06 11:14:07.070956] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:01.157 [2024-12-06 11:14:07.070960] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.157 [2024-12-06 11:14:07.070966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.078867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.078879] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:01.157 [2024-12-06 11:14:07.078892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.078900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.078907] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.157 [2024-12-06 11:14:07.078912] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.157 [2024-12-06 11:14:07.078915] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.157 [2024-12-06 11:14:07.078921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.086869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.086883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.086892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.086899] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.157 [2024-12-06 11:14:07.086907] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.157 [2024-12-06 11:14:07.086910] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.157 [2024-12-06 11:14:07.086916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.094867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.094877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.094884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.094892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.094900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.094905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.094910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.094915] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:01.157 [2024-12-06 11:14:07.094920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:01.157 [2024-12-06 11:14:07.094926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:01.157 [2024-12-06 11:14:07.094943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.102869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.102883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.110868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.110882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.118867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 
11:14:07.118880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.157 [2024-12-06 11:14:07.126867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:01.157 [2024-12-06 11:14:07.126883] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:01.157 [2024-12-06 11:14:07.126888] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:01.158 [2024-12-06 11:14:07.126892] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:01.158 [2024-12-06 11:14:07.126896] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:01.158 [2024-12-06 11:14:07.126899] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:01.158 [2024-12-06 11:14:07.126907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:01.158 [2024-12-06 11:14:07.126916] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:01.158 [2024-12-06 11:14:07.126920] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:01.158 [2024-12-06 11:14:07.126924] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.158 [2024-12-06 11:14:07.126930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:01.158 [2024-12-06 11:14:07.126937] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:01.158 [2024-12-06 11:14:07.126941] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.158 [2024-12-06 11:14:07.126945] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.158 [2024-12-06 11:14:07.126951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.158 [2024-12-06 11:14:07.126959] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:01.158 [2024-12-06 11:14:07.126963] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:01.158 [2024-12-06 11:14:07.126966] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.158 [2024-12-06 11:14:07.126972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:01.158 [2024-12-06 11:14:07.134867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:01.158 [2024-12-06 11:14:07.134882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:01.158 [2024-12-06 11:14:07.134893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:01.158 [2024-12-06 11:14:07.134900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:01.158 ===================================================== 00:15:01.158 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.158 ===================================================== 00:15:01.158 Controller Capabilities/Features 00:15:01.158 
================================ 00:15:01.158 Vendor ID: 4e58 00:15:01.158 Subsystem Vendor ID: 4e58 00:15:01.158 Serial Number: SPDK2 00:15:01.158 Model Number: SPDK bdev Controller 00:15:01.158 Firmware Version: 25.01 00:15:01.158 Recommended Arb Burst: 6 00:15:01.158 IEEE OUI Identifier: 8d 6b 50 00:15:01.158 Multi-path I/O 00:15:01.158 May have multiple subsystem ports: Yes 00:15:01.158 May have multiple controllers: Yes 00:15:01.158 Associated with SR-IOV VF: No 00:15:01.158 Max Data Transfer Size: 131072 00:15:01.158 Max Number of Namespaces: 32 00:15:01.158 Max Number of I/O Queues: 127 00:15:01.158 NVMe Specification Version (VS): 1.3 00:15:01.158 NVMe Specification Version (Identify): 1.3 00:15:01.158 Maximum Queue Entries: 256 00:15:01.158 Contiguous Queues Required: Yes 00:15:01.158 Arbitration Mechanisms Supported 00:15:01.158 Weighted Round Robin: Not Supported 00:15:01.158 Vendor Specific: Not Supported 00:15:01.158 Reset Timeout: 15000 ms 00:15:01.158 Doorbell Stride: 4 bytes 00:15:01.158 NVM Subsystem Reset: Not Supported 00:15:01.158 Command Sets Supported 00:15:01.158 NVM Command Set: Supported 00:15:01.158 Boot Partition: Not Supported 00:15:01.158 Memory Page Size Minimum: 4096 bytes 00:15:01.158 Memory Page Size Maximum: 4096 bytes 00:15:01.158 Persistent Memory Region: Not Supported 00:15:01.158 Optional Asynchronous Events Supported 00:15:01.158 Namespace Attribute Notices: Supported 00:15:01.158 Firmware Activation Notices: Not Supported 00:15:01.158 ANA Change Notices: Not Supported 00:15:01.158 PLE Aggregate Log Change Notices: Not Supported 00:15:01.158 LBA Status Info Alert Notices: Not Supported 00:15:01.158 EGE Aggregate Log Change Notices: Not Supported 00:15:01.158 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.158 Zone Descriptor Change Notices: Not Supported 00:15:01.158 Discovery Log Change Notices: Not Supported 00:15:01.158 Controller Attributes 00:15:01.158 128-bit Host Identifier: Supported 00:15:01.158 
Non-Operational Permissive Mode: Not Supported 00:15:01.158 NVM Sets: Not Supported 00:15:01.158 Read Recovery Levels: Not Supported 00:15:01.158 Endurance Groups: Not Supported 00:15:01.158 Predictable Latency Mode: Not Supported 00:15:01.158 Traffic Based Keep ALive: Not Supported 00:15:01.158 Namespace Granularity: Not Supported 00:15:01.158 SQ Associations: Not Supported 00:15:01.158 UUID List: Not Supported 00:15:01.158 Multi-Domain Subsystem: Not Supported 00:15:01.158 Fixed Capacity Management: Not Supported 00:15:01.158 Variable Capacity Management: Not Supported 00:15:01.158 Delete Endurance Group: Not Supported 00:15:01.158 Delete NVM Set: Not Supported 00:15:01.158 Extended LBA Formats Supported: Not Supported 00:15:01.158 Flexible Data Placement Supported: Not Supported 00:15:01.158 00:15:01.158 Controller Memory Buffer Support 00:15:01.158 ================================ 00:15:01.158 Supported: No 00:15:01.158 00:15:01.158 Persistent Memory Region Support 00:15:01.158 ================================ 00:15:01.158 Supported: No 00:15:01.158 00:15:01.158 Admin Command Set Attributes 00:15:01.158 ============================ 00:15:01.158 Security Send/Receive: Not Supported 00:15:01.158 Format NVM: Not Supported 00:15:01.158 Firmware Activate/Download: Not Supported 00:15:01.158 Namespace Management: Not Supported 00:15:01.158 Device Self-Test: Not Supported 00:15:01.158 Directives: Not Supported 00:15:01.158 NVMe-MI: Not Supported 00:15:01.158 Virtualization Management: Not Supported 00:15:01.158 Doorbell Buffer Config: Not Supported 00:15:01.158 Get LBA Status Capability: Not Supported 00:15:01.158 Command & Feature Lockdown Capability: Not Supported 00:15:01.158 Abort Command Limit: 4 00:15:01.158 Async Event Request Limit: 4 00:15:01.158 Number of Firmware Slots: N/A 00:15:01.158 Firmware Slot 1 Read-Only: N/A 00:15:01.158 Firmware Activation Without Reset: N/A 00:15:01.158 Multiple Update Detection Support: N/A 00:15:01.158 Firmware Update 
Granularity: No Information Provided 00:15:01.158 Per-Namespace SMART Log: No 00:15:01.158 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.158 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:01.158 Command Effects Log Page: Supported 00:15:01.158 Get Log Page Extended Data: Supported 00:15:01.158 Telemetry Log Pages: Not Supported 00:15:01.158 Persistent Event Log Pages: Not Supported 00:15:01.158 Supported Log Pages Log Page: May Support 00:15:01.158 Commands Supported & Effects Log Page: Not Supported 00:15:01.158 Feature Identifiers & Effects Log Page:May Support 00:15:01.158 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.158 Data Area 4 for Telemetry Log: Not Supported 00:15:01.158 Error Log Page Entries Supported: 128 00:15:01.158 Keep Alive: Supported 00:15:01.158 Keep Alive Granularity: 10000 ms 00:15:01.158 00:15:01.158 NVM Command Set Attributes 00:15:01.158 ========================== 00:15:01.158 Submission Queue Entry Size 00:15:01.158 Max: 64 00:15:01.158 Min: 64 00:15:01.158 Completion Queue Entry Size 00:15:01.158 Max: 16 00:15:01.158 Min: 16 00:15:01.158 Number of Namespaces: 32 00:15:01.158 Compare Command: Supported 00:15:01.158 Write Uncorrectable Command: Not Supported 00:15:01.158 Dataset Management Command: Supported 00:15:01.158 Write Zeroes Command: Supported 00:15:01.158 Set Features Save Field: Not Supported 00:15:01.158 Reservations: Not Supported 00:15:01.158 Timestamp: Not Supported 00:15:01.158 Copy: Supported 00:15:01.158 Volatile Write Cache: Present 00:15:01.158 Atomic Write Unit (Normal): 1 00:15:01.158 Atomic Write Unit (PFail): 1 00:15:01.158 Atomic Compare & Write Unit: 1 00:15:01.158 Fused Compare & Write: Supported 00:15:01.158 Scatter-Gather List 00:15:01.158 SGL Command Set: Supported (Dword aligned) 00:15:01.158 SGL Keyed: Not Supported 00:15:01.158 SGL Bit Bucket Descriptor: Not Supported 00:15:01.158 SGL Metadata Pointer: Not Supported 00:15:01.158 Oversized SGL: Not Supported 00:15:01.158 SGL 
Metadata Address: Not Supported 00:15:01.158 SGL Offset: Not Supported 00:15:01.158 Transport SGL Data Block: Not Supported 00:15:01.158 Replay Protected Memory Block: Not Supported 00:15:01.158 00:15:01.158 Firmware Slot Information 00:15:01.158 ========================= 00:15:01.158 Active slot: 1 00:15:01.158 Slot 1 Firmware Revision: 25.01 00:15:01.158 00:15:01.158 00:15:01.158 Commands Supported and Effects 00:15:01.158 ============================== 00:15:01.158 Admin Commands 00:15:01.158 -------------- 00:15:01.159 Get Log Page (02h): Supported 00:15:01.159 Identify (06h): Supported 00:15:01.159 Abort (08h): Supported 00:15:01.159 Set Features (09h): Supported 00:15:01.159 Get Features (0Ah): Supported 00:15:01.159 Asynchronous Event Request (0Ch): Supported 00:15:01.159 Keep Alive (18h): Supported 00:15:01.159 I/O Commands 00:15:01.159 ------------ 00:15:01.159 Flush (00h): Supported LBA-Change 00:15:01.159 Write (01h): Supported LBA-Change 00:15:01.159 Read (02h): Supported 00:15:01.159 Compare (05h): Supported 00:15:01.159 Write Zeroes (08h): Supported LBA-Change 00:15:01.159 Dataset Management (09h): Supported LBA-Change 00:15:01.159 Copy (19h): Supported LBA-Change 00:15:01.159 00:15:01.159 Error Log 00:15:01.159 ========= 00:15:01.159 00:15:01.159 Arbitration 00:15:01.159 =========== 00:15:01.159 Arbitration Burst: 1 00:15:01.159 00:15:01.159 Power Management 00:15:01.159 ================ 00:15:01.159 Number of Power States: 1 00:15:01.159 Current Power State: Power State #0 00:15:01.159 Power State #0: 00:15:01.159 Max Power: 0.00 W 00:15:01.159 Non-Operational State: Operational 00:15:01.159 Entry Latency: Not Reported 00:15:01.159 Exit Latency: Not Reported 00:15:01.159 Relative Read Throughput: 0 00:15:01.159 Relative Read Latency: 0 00:15:01.159 Relative Write Throughput: 0 00:15:01.159 Relative Write Latency: 0 00:15:01.159 Idle Power: Not Reported 00:15:01.159 Active Power: Not Reported 00:15:01.159 Non-Operational Permissive Mode: Not 
Supported 00:15:01.159 00:15:01.159 Health Information 00:15:01.159 ================== 00:15:01.159 Critical Warnings: 00:15:01.159 Available Spare Space: OK 00:15:01.159 Temperature: OK 00:15:01.159 Device Reliability: OK 00:15:01.159 Read Only: No 00:15:01.159 Volatile Memory Backup: OK 00:15:01.159 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:01.159 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:01.159 Available Spare: 0% 00:15:01.159 Available Spare Threshold: 0% 00:15:01.159 [2024-12-06 11:14:07.135003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:01.159 [2024-12-06 11:14:07.142866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:01.159 [2024-12-06 11:14:07.142899] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:01.159 [2024-12-06 11:14:07.142909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.159 [2024-12-06 11:14:07.142916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.159 [2024-12-06 11:14:07.142922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.159 [2024-12-06 11:14:07.142929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.159 [2024-12-06 11:14:07.142968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:01.159 [2024-12-06 11:14:07.142979] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:01.159 
[2024-12-06 11:14:07.143973] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.159 [2024-12-06 11:14:07.144024] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:01.159 [2024-12-06 11:14:07.144034] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:01.159 [2024-12-06 11:14:07.144975] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:01.159 [2024-12-06 11:14:07.144987] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:01.159 [2024-12-06 11:14:07.145037] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:01.159 [2024-12-06 11:14:07.147869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.159 Life Percentage Used: 0% 00:15:01.159 Data Units Read: 0 00:15:01.159 Data Units Written: 0 00:15:01.159 Host Read Commands: 0 00:15:01.159 Host Write Commands: 0 00:15:01.159 Controller Busy Time: 0 minutes 00:15:01.159 Power Cycles: 0 00:15:01.159 Power On Hours: 0 hours 00:15:01.159 Unsafe Shutdowns: 0 00:15:01.159 Unrecoverable Media Errors: 0 00:15:01.159 Lifetime Error Log Entries: 0 00:15:01.159 Warning Temperature Time: 0 minutes 00:15:01.159 Critical Temperature Time: 0 minutes 00:15:01.159 00:15:01.159 Number of Queues 00:15:01.159 ================ 00:15:01.159 Number of I/O Submission Queues: 127 00:15:01.159 Number of I/O Completion Queues: 127 00:15:01.159 00:15:01.159 Active Namespaces 00:15:01.159 ================= 00:15:01.159 Namespace ID:1 00:15:01.159 Error Recovery Timeout: Unlimited 
00:15:01.159 Command Set Identifier: NVM (00h) 00:15:01.159 Deallocate: Supported 00:15:01.159 Deallocated/Unwritten Error: Not Supported 00:15:01.159 Deallocated Read Value: Unknown 00:15:01.159 Deallocate in Write Zeroes: Not Supported 00:15:01.159 Deallocated Guard Field: 0xFFFF 00:15:01.159 Flush: Supported 00:15:01.159 Reservation: Supported 00:15:01.159 Namespace Sharing Capabilities: Multiple Controllers 00:15:01.159 Size (in LBAs): 131072 (0GiB) 00:15:01.159 Capacity (in LBAs): 131072 (0GiB) 00:15:01.159 Utilization (in LBAs): 131072 (0GiB) 00:15:01.159 NGUID: 6C08EB666553419B9387E97178CFA4F5 00:15:01.159 UUID: 6c08eb66-6553-419b-9387-e97178cfa4f5 00:15:01.159 Thin Provisioning: Not Supported 00:15:01.159 Per-NS Atomic Units: Yes 00:15:01.159 Atomic Boundary Size (Normal): 0 00:15:01.159 Atomic Boundary Size (PFail): 0 00:15:01.159 Atomic Boundary Offset: 0 00:15:01.159 Maximum Single Source Range Length: 65535 00:15:01.159 Maximum Copy Length: 65535 00:15:01.159 Maximum Source Range Count: 1 00:15:01.159 NGUID/EUI64 Never Reused: No 00:15:01.159 Namespace Write Protected: No 00:15:01.159 Number of LBA Formats: 1 00:15:01.159 Current LBA Format: LBA Format #00 00:15:01.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:01.159 00:15:01.159 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:01.418 [2024-12-06 11:14:07.343956] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.704 Initializing NVMe Controllers 00:15:06.704 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:06.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:06.704 Initialization complete. Launching workers. 00:15:06.704 ======================================================== 00:15:06.704 Latency(us) 00:15:06.704 Device Information : IOPS MiB/s Average min max 00:15:06.704 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40002.80 156.26 3199.88 869.06 9754.29 00:15:06.704 ======================================================== 00:15:06.704 Total : 40002.80 156.26 3199.88 869.06 9754.29 00:15:06.704 00:15:06.704 [2024-12-06 11:14:12.447062] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.704 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:06.704 [2024-12-06 11:14:12.638682] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.984 Initializing NVMe Controllers 00:15:11.984 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.984 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:11.984 Initialization complete. Launching workers. 
00:15:11.984 ======================================================== 00:15:11.984 Latency(us) 00:15:11.984 Device Information : IOPS MiB/s Average min max 00:15:11.984 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33928.32 132.53 3772.28 1130.40 8432.43 00:15:11.984 ======================================================== 00:15:11.984 Total : 33928.32 132.53 3772.28 1130.40 8432.43 00:15:11.984 00:15:11.984 [2024-12-06 11:14:17.659850] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.984 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:11.984 [2024-12-06 11:14:17.869061] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.262 [2024-12-06 11:14:23.012938] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.262 Initializing NVMe Controllers 00:15:17.262 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.262 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:17.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:17.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:17.262 Initialization complete. Launching workers. 
00:15:17.262 Starting thread on core 2 00:15:17.262 Starting thread on core 3 00:15:17.262 Starting thread on core 1 00:15:17.262 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:17.262 [2024-12-06 11:14:23.303108] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.558 [2024-12-06 11:14:26.359303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.558 Initializing NVMe Controllers 00:15:20.558 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.558 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.558 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:20.558 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:20.558 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:20.558 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:20.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:20.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:20.558 Initialization complete. Launching workers. 
00:15:20.558 Starting thread on core 1 with urgent priority queue 00:15:20.558 Starting thread on core 2 with urgent priority queue 00:15:20.558 Starting thread on core 3 with urgent priority queue 00:15:20.558 Starting thread on core 0 with urgent priority queue 00:15:20.558 SPDK bdev Controller (SPDK2 ) core 0: 8086.67 IO/s 12.37 secs/100000 ios 00:15:20.558 SPDK bdev Controller (SPDK2 ) core 1: 10331.33 IO/s 9.68 secs/100000 ios 00:15:20.558 SPDK bdev Controller (SPDK2 ) core 2: 8065.67 IO/s 12.40 secs/100000 ios 00:15:20.558 SPDK bdev Controller (SPDK2 ) core 3: 9969.00 IO/s 10.03 secs/100000 ios 00:15:20.558 ======================================================== 00:15:20.558 00:15:20.558 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:20.558 [2024-12-06 11:14:26.662305] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.558 Initializing NVMe Controllers 00:15:20.558 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.558 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.558 Namespace ID: 1 size: 0GB 00:15:20.558 Initialization complete. 00:15:20.558 INFO: using host memory buffer for IO 00:15:20.558 Hello world! 
00:15:20.558 [2024-12-06 11:14:26.672355] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.818 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:20.818 [2024-12-06 11:14:26.968868] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.236 Initializing NVMe Controllers 00:15:22.236 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.236 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.236 Initialization complete. Launching workers. 00:15:22.236 submit (in ns) avg, min, max = 8983.9, 3906.7, 3999928.3 00:15:22.236 complete (in ns) avg, min, max = 15035.8, 2373.3, 4995027.5 00:15:22.236 00:15:22.236 Submit histogram 00:15:22.236 ================ 00:15:22.236 Range in us Cumulative Count 00:15:22.236 3.893 - 3.920: 0.2937% ( 55) 00:15:22.236 3.920 - 3.947: 2.7179% ( 454) 00:15:22.236 3.947 - 3.973: 8.2337% ( 1033) 00:15:22.236 3.973 - 4.000: 17.5406% ( 1743) 00:15:22.236 4.000 - 4.027: 28.7003% ( 2090) 00:15:22.236 4.027 - 4.053: 40.1484% ( 2144) 00:15:22.236 4.053 - 4.080: 55.2648% ( 2831) 00:15:22.236 4.080 - 4.107: 72.1540% ( 3163) 00:15:22.236 4.107 - 4.133: 84.8889% ( 2385) 00:15:22.236 4.133 - 4.160: 92.8823% ( 1497) 00:15:22.236 4.160 - 4.187: 97.0953% ( 789) 00:15:22.236 4.187 - 4.213: 98.6598% ( 293) 00:15:22.236 4.213 - 4.240: 99.1991% ( 101) 00:15:22.236 4.240 - 4.267: 99.3806% ( 34) 00:15:22.236 4.267 - 4.293: 99.4393% ( 11) 00:15:22.236 4.293 - 4.320: 99.4447% ( 1) 00:15:22.236 4.560 - 4.587: 99.4500% ( 1) 00:15:22.236 4.773 - 4.800: 99.4554% ( 1) 00:15:22.236 4.987 - 5.013: 99.4607% ( 1) 00:15:22.236 5.520 - 5.547: 99.4660% ( 1) 00:15:22.236 5.733 - 5.760: 99.4714% ( 1) 
00:15:22.236 5.973 - 6.000: 99.4767% ( 1) 00:15:22.236 6.027 - 6.053: 99.4821% ( 1) 00:15:22.236 6.053 - 6.080: 99.4874% ( 1) 00:15:22.236 6.080 - 6.107: 99.4927% ( 1) 00:15:22.236 6.133 - 6.160: 99.5034% ( 2) 00:15:22.236 6.160 - 6.187: 99.5088% ( 1) 00:15:22.236 6.187 - 6.213: 99.5141% ( 1) 00:15:22.236 6.213 - 6.240: 99.5248% ( 2) 00:15:22.236 6.240 - 6.267: 99.5301% ( 1) 00:15:22.236 6.267 - 6.293: 99.5355% ( 1) 00:15:22.236 6.347 - 6.373: 99.5408% ( 1) 00:15:22.236 6.373 - 6.400: 99.5461% ( 1) 00:15:22.236 6.400 - 6.427: 99.5515% ( 1) 00:15:22.236 6.453 - 6.480: 99.5728% ( 4) 00:15:22.236 6.480 - 6.507: 99.5835% ( 2) 00:15:22.236 6.507 - 6.533: 99.5942% ( 2) 00:15:22.236 6.533 - 6.560: 99.5995% ( 1) 00:15:22.236 6.560 - 6.587: 99.6049% ( 1) 00:15:22.236 6.587 - 6.613: 99.6102% ( 1) 00:15:22.236 6.613 - 6.640: 99.6209% ( 2) 00:15:22.236 6.640 - 6.667: 99.6262% ( 1) 00:15:22.236 6.693 - 6.720: 99.6316% ( 1) 00:15:22.236 6.720 - 6.747: 99.6476% ( 3) 00:15:22.236 6.747 - 6.773: 99.6583% ( 2) 00:15:22.236 6.773 - 6.800: 99.6636% ( 1) 00:15:22.236 6.827 - 6.880: 99.6796% ( 3) 00:15:22.236 6.880 - 6.933: 99.6903% ( 2) 00:15:22.236 6.933 - 6.987: 99.7010% ( 2) 00:15:22.236 7.040 - 7.093: 99.7277% ( 5) 00:15:22.236 7.093 - 7.147: 99.7384% ( 2) 00:15:22.236 7.147 - 7.200: 99.7490% ( 2) 00:15:22.236 7.200 - 7.253: 99.7597% ( 2) 00:15:22.236 7.253 - 7.307: 99.7651% ( 1) 00:15:22.236 7.360 - 7.413: 99.7757% ( 2) 00:15:22.236 7.413 - 7.467: 99.7864% ( 2) 00:15:22.236 7.467 - 7.520: 99.7971% ( 2) 00:15:22.236 7.520 - 7.573: 99.8024% ( 1) 00:15:22.236 7.573 - 7.627: 99.8078% ( 1) 00:15:22.236 7.733 - 7.787: 99.8185% ( 2) 00:15:22.237 7.787 - 7.840: 99.8238% ( 1) 00:15:22.237 7.893 - 7.947: 99.8398% ( 3) 00:15:22.237 7.947 - 8.000: 99.8452% ( 1) 00:15:22.237 8.107 - 8.160: 99.8558% ( 2) 00:15:22.237 8.160 - 8.213: 99.8612% ( 1) 00:15:22.237 8.587 - 8.640: 99.8665% ( 1) 00:15:22.237 8.853 - 8.907: 99.8718% ( 1) 00:15:22.237 12.747 - 12.800: 99.8772% ( 1) 00:15:22.237 3986.773 - 
4014.080: 100.0000% ( 23) 00:15:22.237 00:15:22.237 Complete histogram 00:15:22.237 ================== 00:15:22.237 Range in us Cumulative Count 00:15:22.237 2.373 - 2.387: 0.0053% ( 1) 00:15:22.237 2.387 - 2.400: 0.2456% ( 45) 00:15:22.237 2.400 - 2.413: 1.1160% ( 163) 00:15:22.237 2.413 - 2.427: 1.2868% ( 32) 00:15:22.237 2.427 - 2.440: 1.4737% ( 35) 00:15:22.237 2.440 - 2.453: 1.5164% ( 8) 00:15:22.237 2.453 - 2.467: 33.2657% ( 5946) 00:15:22.237 2.467 - 2.480: 53.0062% ( 3697) 00:15:22.237 2.480 - [2024-12-06 11:14:28.066534] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.237 2.493: 67.6581% ( 2744) 00:15:22.237 2.493 - 2.507: 76.9703% ( 1744) 00:15:22.237 2.507 - 2.520: 81.2420% ( 800) 00:15:22.237 2.520 - 2.533: 83.2230% ( 371) 00:15:22.237 2.533 - 2.547: 88.2956% ( 950) 00:15:22.237 2.547 - 2.560: 93.1493% ( 909) 00:15:22.237 2.560 - 2.573: 96.1982% ( 571) 00:15:22.237 2.573 - 2.587: 98.3127% ( 396) 00:15:22.237 2.587 - 2.600: 99.1724% ( 161) 00:15:22.237 2.600 - 2.613: 99.4126% ( 45) 00:15:22.237 2.613 - 2.627: 99.4660% ( 10) 00:15:22.237 2.627 - 2.640: 99.4821% ( 3) 00:15:22.237 2.640 - 2.653: 99.4927% ( 2) 00:15:22.237 4.373 - 4.400: 99.4981% ( 1) 00:15:22.237 4.400 - 4.427: 99.5034% ( 1) 00:15:22.237 4.827 - 4.853: 99.5088% ( 1) 00:15:22.237 5.040 - 5.067: 99.5141% ( 1) 00:15:22.237 5.200 - 5.227: 99.5194% ( 1) 00:15:22.237 5.227 - 5.253: 99.5248% ( 1) 00:15:22.237 5.360 - 5.387: 99.5355% ( 2) 00:15:22.237 5.387 - 5.413: 99.5461% ( 2) 00:15:22.237 5.413 - 5.440: 99.5515% ( 1) 00:15:22.237 5.467 - 5.493: 99.5675% ( 3) 00:15:22.237 5.493 - 5.520: 99.5782% ( 2) 00:15:22.237 5.520 - 5.547: 99.5889% ( 2) 00:15:22.237 5.627 - 5.653: 99.5942% ( 1) 00:15:22.237 5.653 - 5.680: 99.6049% ( 2) 00:15:22.237 5.707 - 5.733: 99.6155% ( 2) 00:15:22.237 5.733 - 5.760: 99.6316% ( 3) 00:15:22.237 5.787 - 5.813: 99.6369% ( 1) 00:15:22.237 5.813 - 5.840: 99.6422% ( 1) 00:15:22.237 5.893 - 5.920: 99.6476% ( 1) 
00:15:22.237 5.920 - 5.947: 99.6529% ( 1) 00:15:22.237 6.000 - 6.027: 99.6583% ( 1) 00:15:22.237 6.027 - 6.053: 99.6636% ( 1) 00:15:22.237 6.053 - 6.080: 99.6689% ( 1) 00:15:22.237 6.107 - 6.133: 99.6743% ( 1) 00:15:22.237 6.453 - 6.480: 99.6796% ( 1) 00:15:22.237 13.440 - 13.493: 99.6850% ( 1) 00:15:22.237 3031.040 - 3044.693: 99.6956% ( 2) 00:15:22.237 3986.773 - 4014.080: 99.9893% ( 55) 00:15:22.237 4041.387 - 4068.693: 99.9947% ( 1) 00:15:22.237 4969.813 - 4997.120: 100.0000% ( 1) 00:15:22.237 00:15:22.237 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:22.237 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.237 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.237 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:22.237 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:22.237 [ 00:15:22.237 { 00:15:22.237 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.237 "subtype": "Discovery", 00:15:22.237 "listen_addresses": [], 00:15:22.237 "allow_any_host": true, 00:15:22.237 "hosts": [] 00:15:22.237 }, 00:15:22.237 { 00:15:22.237 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.237 "subtype": "NVMe", 00:15:22.237 "listen_addresses": [ 00:15:22.237 { 00:15:22.237 "trtype": "VFIOUSER", 00:15:22.237 "adrfam": "IPv4", 00:15:22.237 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.237 "trsvcid": "0" 00:15:22.237 } 00:15:22.237 ], 00:15:22.237 "allow_any_host": true, 00:15:22.237 "hosts": [], 00:15:22.237 "serial_number": "SPDK1", 00:15:22.237 "model_number": 
"SPDK bdev Controller", 00:15:22.237 "max_namespaces": 32, 00:15:22.237 "min_cntlid": 1, 00:15:22.237 "max_cntlid": 65519, 00:15:22.237 "namespaces": [ 00:15:22.237 { 00:15:22.237 "nsid": 1, 00:15:22.237 "bdev_name": "Malloc1", 00:15:22.237 "name": "Malloc1", 00:15:22.237 "nguid": "03227D817B7D442F8E46B6097F0AFF1D", 00:15:22.237 "uuid": "03227d81-7b7d-442f-8e46-b6097f0aff1d" 00:15:22.237 }, 00:15:22.237 { 00:15:22.237 "nsid": 2, 00:15:22.237 "bdev_name": "Malloc3", 00:15:22.237 "name": "Malloc3", 00:15:22.237 "nguid": "5A72AAF3BED04254AC8F286EA6B1C0F3", 00:15:22.237 "uuid": "5a72aaf3-bed0-4254-ac8f-286ea6b1c0f3" 00:15:22.237 } 00:15:22.237 ] 00:15:22.237 }, 00:15:22.237 { 00:15:22.237 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.237 "subtype": "NVMe", 00:15:22.237 "listen_addresses": [ 00:15:22.237 { 00:15:22.237 "trtype": "VFIOUSER", 00:15:22.237 "adrfam": "IPv4", 00:15:22.237 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.237 "trsvcid": "0" 00:15:22.237 } 00:15:22.237 ], 00:15:22.237 "allow_any_host": true, 00:15:22.237 "hosts": [], 00:15:22.237 "serial_number": "SPDK2", 00:15:22.237 "model_number": "SPDK bdev Controller", 00:15:22.237 "max_namespaces": 32, 00:15:22.238 "min_cntlid": 1, 00:15:22.238 "max_cntlid": 65519, 00:15:22.238 "namespaces": [ 00:15:22.238 { 00:15:22.238 "nsid": 1, 00:15:22.238 "bdev_name": "Malloc2", 00:15:22.238 "name": "Malloc2", 00:15:22.238 "nguid": "6C08EB666553419B9387E97178CFA4F5", 00:15:22.238 "uuid": "6c08eb66-6553-419b-9387-e97178cfa4f5" 00:15:22.238 } 00:15:22.238 ] 00:15:22.238 } 00:15:22.238 ] 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t 
/tmp/aer_touch_file 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3381839 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:22.238 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:22.497 Malloc4 00:15:22.497 [2024-12-06 11:14:28.493712] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.497 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:22.757 [2024-12-06 11:14:28.681013] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.757 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:22.757 Asynchronous Event Request test 00:15:22.757 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.757 Attached to /var/run/vfio-user/domain/vfio-user2/2 
00:15:22.757 Registering asynchronous event callbacks... 00:15:22.757 Starting namespace attribute notice tests for all controllers... 00:15:22.757 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:22.757 aer_cb - Changed Namespace 00:15:22.757 Cleaning up... 00:15:22.757 [ 00:15:22.757 { 00:15:22.757 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.757 "subtype": "Discovery", 00:15:22.757 "listen_addresses": [], 00:15:22.757 "allow_any_host": true, 00:15:22.757 "hosts": [] 00:15:22.757 }, 00:15:22.757 { 00:15:22.757 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.757 "subtype": "NVMe", 00:15:22.757 "listen_addresses": [ 00:15:22.757 { 00:15:22.757 "trtype": "VFIOUSER", 00:15:22.757 "adrfam": "IPv4", 00:15:22.757 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.757 "trsvcid": "0" 00:15:22.757 } 00:15:22.757 ], 00:15:22.757 "allow_any_host": true, 00:15:22.757 "hosts": [], 00:15:22.757 "serial_number": "SPDK1", 00:15:22.757 "model_number": "SPDK bdev Controller", 00:15:22.757 "max_namespaces": 32, 00:15:22.757 "min_cntlid": 1, 00:15:22.757 "max_cntlid": 65519, 00:15:22.757 "namespaces": [ 00:15:22.757 { 00:15:22.757 "nsid": 1, 00:15:22.757 "bdev_name": "Malloc1", 00:15:22.757 "name": "Malloc1", 00:15:22.757 "nguid": "03227D817B7D442F8E46B6097F0AFF1D", 00:15:22.757 "uuid": "03227d81-7b7d-442f-8e46-b6097f0aff1d" 00:15:22.757 }, 00:15:22.757 { 00:15:22.757 "nsid": 2, 00:15:22.757 "bdev_name": "Malloc3", 00:15:22.757 "name": "Malloc3", 00:15:22.757 "nguid": "5A72AAF3BED04254AC8F286EA6B1C0F3", 00:15:22.757 "uuid": "5a72aaf3-bed0-4254-ac8f-286ea6b1c0f3" 00:15:22.757 } 00:15:22.757 ] 00:15:22.757 }, 00:15:22.757 { 00:15:22.757 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.757 "subtype": "NVMe", 00:15:22.757 "listen_addresses": [ 00:15:22.757 { 00:15:22.757 "trtype": "VFIOUSER", 00:15:22.757 "adrfam": "IPv4", 00:15:22.757 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.757 "trsvcid": 
"0" 00:15:22.757 } 00:15:22.757 ], 00:15:22.757 "allow_any_host": true, 00:15:22.757 "hosts": [], 00:15:22.757 "serial_number": "SPDK2", 00:15:22.757 "model_number": "SPDK bdev Controller", 00:15:22.757 "max_namespaces": 32, 00:15:22.757 "min_cntlid": 1, 00:15:22.757 "max_cntlid": 65519, 00:15:22.757 "namespaces": [ 00:15:22.757 { 00:15:22.757 "nsid": 1, 00:15:22.757 "bdev_name": "Malloc2", 00:15:22.757 "name": "Malloc2", 00:15:22.757 "nguid": "6C08EB666553419B9387E97178CFA4F5", 00:15:22.757 "uuid": "6c08eb66-6553-419b-9387-e97178cfa4f5" 00:15:22.757 }, 00:15:22.757 { 00:15:22.757 "nsid": 2, 00:15:22.757 "bdev_name": "Malloc4", 00:15:22.757 "name": "Malloc4", 00:15:22.757 "nguid": "893287F55B014D24A5118A27872489AB", 00:15:22.757 "uuid": "893287f5-5b01-4d24-a511-8a27872489ab" 00:15:22.757 } 00:15:22.757 ] 00:15:22.757 } 00:15:22.757 ] 00:15:22.757 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3381839 00:15:22.758 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:22.758 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3372759 00:15:22.758 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3372759 ']' 00:15:22.758 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3372759 00:15:22.758 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:22.758 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.758 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3372759 00:15:23.017 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.017 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.017 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3372759' 00:15:23.017 killing process with pid 3372759 00:15:23.017 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3372759 00:15:23.017 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3372759 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3382072 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3382072' 00:15:23.017 Process pid: 3382072 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3382072 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 3382072 ']' 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.017 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:23.017 [2024-12-06 11:14:29.177989] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:23.017 [2024-12-06 11:14:29.178926] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:15:23.017 [2024-12-06 11:14:29.178970] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.277 [2024-12-06 11:14:29.257207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.277 [2024-12-06 11:14:29.292449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.277 [2024-12-06 11:14:29.292484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:23.277 [2024-12-06 11:14:29.292492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.277 [2024-12-06 11:14:29.292498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.277 [2024-12-06 11:14:29.292505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.277 [2024-12-06 11:14:29.294197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.277 [2024-12-06 11:14:29.294338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.277 [2024-12-06 11:14:29.294492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.277 [2024-12-06 11:14:29.294493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.277 [2024-12-06 11:14:29.350680] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:23.277 [2024-12-06 11:14:29.350749] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:23.277 [2024-12-06 11:14:29.351635] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:23.277 [2024-12-06 11:14:29.352363] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:23.277 [2024-12-06 11:14:29.352434] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:23.848 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.848 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:23.848 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:25.230 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:25.230 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:25.230 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:25.230 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.230 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:25.230 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:25.230 Malloc1 00:15:25.230 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:25.490 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:25.750 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:26.010 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.010 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:26.010 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:26.010 Malloc2 00:15:26.010 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:26.270 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:26.529 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3382072 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3382072 ']' 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3382072 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.789 11:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382072 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382072' 00:15:26.789 killing process with pid 3382072 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3382072 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3382072 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:26.789 00:15:26.789 real 0m51.477s 00:15:26.789 user 3m17.296s 00:15:26.789 sys 0m2.845s 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 ************************************ 00:15:26.789 END TEST nvmf_vfio_user 00:15:26.789 ************************************ 00:15:26.789 11:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.050 11:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.050 11:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.050 11:14:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.050 ************************************ 00:15:27.050 START TEST nvmf_vfio_user_nvme_compliance 00:15:27.050 ************************************ 00:15:27.050 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.050 * Looking for test storage... 00:15:27.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.050 11:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:27.050 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.051 11:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.051 --rc genhtml_branch_coverage=1 00:15:27.051 --rc genhtml_function_coverage=1 00:15:27.051 --rc genhtml_legend=1 00:15:27.051 --rc geninfo_all_blocks=1 00:15:27.051 --rc geninfo_unexecuted_blocks=1 00:15:27.051 00:15:27.051 ' 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.051 --rc genhtml_branch_coverage=1 00:15:27.051 --rc genhtml_function_coverage=1 00:15:27.051 --rc genhtml_legend=1 00:15:27.051 --rc geninfo_all_blocks=1 00:15:27.051 --rc geninfo_unexecuted_blocks=1 00:15:27.051 00:15:27.051 ' 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.051 --rc genhtml_branch_coverage=1 00:15:27.051 --rc genhtml_function_coverage=1 00:15:27.051 --rc 
genhtml_legend=1 00:15:27.051 --rc geninfo_all_blocks=1 00:15:27.051 --rc geninfo_unexecuted_blocks=1 00:15:27.051 00:15:27.051 ' 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.051 --rc genhtml_branch_coverage=1 00:15:27.051 --rc genhtml_function_coverage=1 00:15:27.051 --rc genhtml_legend=1 00:15:27.051 --rc geninfo_all_blocks=1 00:15:27.051 --rc geninfo_unexecuted_blocks=1 00:15:27.051 00:15:27.051 ' 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.051 11:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.051 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.052 11:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3382932 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3382932' 00:15:27.052 Process pid: 3382932 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3382932 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3382932 ']' 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.052 11:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:27.052 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:27.312 [2024-12-06 11:14:33.270262] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:15:27.312 [2024-12-06 11:14:33.270338] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.312 [2024-12-06 11:14:33.353555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.312 [2024-12-06 11:14:33.394397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.312 [2024-12-06 11:14:33.394434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.312 [2024-12-06 11:14:33.394442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.312 [2024-12-06 11:14:33.394449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.312 [2024-12-06 11:14:33.394455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:27.312 [2024-12-06 11:14:33.395888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.312 [2024-12-06 11:14:33.395898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.312 [2024-12-06 11:14:33.395921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.251 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.251 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:28.251 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.188 11:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.188 malloc0 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:29.188 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.189 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.189 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:29.189 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:29.189 00:15:29.189 00:15:29.189 CUnit - A unit testing framework for C - Version 2.1-3 00:15:29.189 http://cunit.sourceforge.net/ 00:15:29.189 00:15:29.189 00:15:29.189 Suite: nvme_compliance 00:15:29.189 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 11:14:35.354330] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.447 [2024-12-06 11:14:35.355689] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:29.447 [2024-12-06 11:14:35.355702] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:29.447 [2024-12-06 11:14:35.355707] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:29.447 [2024-12-06 11:14:35.357351] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.447 passed 00:15:29.448 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 11:14:35.453940] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.448 [2024-12-06 11:14:35.456957] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.448 passed 00:15:29.448 Test: admin_identify_ns ...[2024-12-06 11:14:35.553110] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.448 [2024-12-06 11:14:35.612873] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:29.707 [2024-12-06 11:14:35.620874] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:29.707 [2024-12-06 11:14:35.641988] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:29.707 passed 00:15:29.707 Test: admin_get_features_mandatory_features ...[2024-12-06 11:14:35.737021] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.707 [2024-12-06 11:14:35.740042] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.707 passed 00:15:29.707 Test: admin_get_features_optional_features ...[2024-12-06 11:14:35.834580] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.707 [2024-12-06 11:14:35.837593] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.707 passed 00:15:29.967 Test: admin_set_features_number_of_queues ...[2024-12-06 11:14:35.929715] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.967 [2024-12-06 11:14:36.033971] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.967 passed 00:15:29.967 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 11:14:36.127632] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.967 [2024-12-06 11:14:36.130645] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.226 passed 00:15:30.226 Test: admin_get_log_page_with_lpo ...[2024-12-06 11:14:36.222781] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.226 [2024-12-06 11:14:36.291874] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:30.226 [2024-12-06 11:14:36.304939] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.226 passed 00:15:30.486 Test: fabric_property_get ...[2024-12-06 11:14:36.395550] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.486 [2024-12-06 11:14:36.396804] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:30.486 [2024-12-06 11:14:36.398573] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.486 passed 00:15:30.486 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 11:14:36.494178] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.486 [2024-12-06 11:14:36.495425] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:30.487 [2024-12-06 11:14:36.497195] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.487 passed 00:15:30.487 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 11:14:36.589106] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.745 [2024-12-06 11:14:36.672869] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:30.745 [2024-12-06 11:14:36.688870] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:30.746 [2024-12-06 11:14:36.693949] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.746 passed 00:15:30.746 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 11:14:36.787939] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.746 [2024-12-06 11:14:36.789181] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:30.746 [2024-12-06 11:14:36.790962] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.746 passed 00:15:30.746 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 11:14:36.884116] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.005 [2024-12-06 11:14:36.959867] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:31.005 [2024-12-06 
11:14:36.983867] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.005 [2024-12-06 11:14:36.988951] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.005 passed 00:15:31.005 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 11:14:37.082967] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.005 [2024-12-06 11:14:37.084218] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:31.005 [2024-12-06 11:14:37.084239] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:31.005 [2024-12-06 11:14:37.085986] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.005 passed 00:15:31.273 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 11:14:37.179099] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.273 [2024-12-06 11:14:37.270873] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:31.273 [2024-12-06 11:14:37.278872] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:31.273 [2024-12-06 11:14:37.286868] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:31.273 [2024-12-06 11:14:37.294869] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:31.273 [2024-12-06 11:14:37.323945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.273 passed 00:15:31.273 Test: admin_create_io_sq_verify_pc ...[2024-12-06 11:14:37.417960] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.273 [2024-12-06 11:14:37.436878] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:31.532 [2024-12-06 11:14:37.454106] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.532 passed 00:15:31.532 Test: admin_create_io_qp_max_qps ...[2024-12-06 11:14:37.545614] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.909 [2024-12-06 11:14:38.663872] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:32.909 [2024-12-06 11:14:39.059828] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.169 passed 00:15:33.169 Test: admin_create_io_sq_shared_cq ...[2024-12-06 11:14:39.151002] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.169 [2024-12-06 11:14:39.280868] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:33.169 [2024-12-06 11:14:39.317928] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.429 passed 00:15:33.429 00:15:33.429 Run Summary: Type Total Ran Passed Failed Inactive 00:15:33.429 suites 1 1 n/a 0 0 00:15:33.429 tests 18 18 18 0 0 00:15:33.429 asserts 360 360 360 0 n/a 00:15:33.429 00:15:33.429 Elapsed time = 1.662 seconds 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3382932 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3382932 ']' 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3382932 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382932 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382932' 00:15:33.429 killing process with pid 3382932 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3382932 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3382932 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:33.429 00:15:33.429 real 0m6.576s 00:15:33.429 user 0m18.704s 00:15:33.429 sys 0m0.553s 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.429 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.429 ************************************ 00:15:33.429 END TEST nvmf_vfio_user_nvme_compliance 00:15:33.429 ************************************ 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.689 ************************************ 00:15:33.689 START TEST nvmf_vfio_user_fuzz 00:15:33.689 ************************************ 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:33.689 * Looking for test storage... 00:15:33.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.689 11:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:33.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.689 --rc genhtml_branch_coverage=1 00:15:33.689 --rc genhtml_function_coverage=1 00:15:33.689 --rc genhtml_legend=1 00:15:33.689 --rc geninfo_all_blocks=1 00:15:33.689 --rc geninfo_unexecuted_blocks=1 00:15:33.689 00:15:33.689 ' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:33.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.689 --rc genhtml_branch_coverage=1 00:15:33.689 --rc genhtml_function_coverage=1 00:15:33.689 --rc genhtml_legend=1 00:15:33.689 --rc geninfo_all_blocks=1 00:15:33.689 --rc geninfo_unexecuted_blocks=1 00:15:33.689 00:15:33.689 ' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:33.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.689 --rc genhtml_branch_coverage=1 00:15:33.689 --rc genhtml_function_coverage=1 00:15:33.689 --rc genhtml_legend=1 00:15:33.689 --rc geninfo_all_blocks=1 00:15:33.689 --rc geninfo_unexecuted_blocks=1 00:15:33.689 00:15:33.689 ' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:33.689 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:33.689 --rc genhtml_branch_coverage=1 00:15:33.689 --rc genhtml_function_coverage=1 00:15:33.689 --rc genhtml_legend=1 00:15:33.689 --rc geninfo_all_blocks=1 00:15:33.689 --rc geninfo_unexecuted_blocks=1 00:15:33.689 00:15:33.689 ' 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.689 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.949 11:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.949 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3384334 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3384334' 00:15:33.950 Process pid: 3384334 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3384334 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3384334 ']' 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.950 11:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.950 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.890 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.890 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:34.890 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.830 malloc0 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.830 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.831 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:35.831 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.831 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.831 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.831 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:35.831 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:07.935 Fuzzing completed. Shutting down the fuzz application 00:16:07.935 00:16:07.935 Dumping successful admin opcodes: 00:16:07.935 9, 10, 00:16:07.935 Dumping successful io opcodes: 00:16:07.935 0, 00:16:07.935 NS: 0x20000081ef00 I/O qp, Total commands completed: 1127300, total successful commands: 4440, random_seed: 665869120 00:16:07.935 NS: 0x20000081ef00 admin qp, Total commands completed: 142880, total successful commands: 32, random_seed: 3822725376 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3384334 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3384334 ']' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3384334 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3384334 00:16:07.935 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3384334' 00:16:07.935 killing process with pid 3384334 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3384334 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3384334 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:07.935 00:16:07.935 real 0m32.803s 00:16:07.935 user 0m36.089s 00:16:07.935 sys 0m26.666s 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.935 ************************************ 00:16:07.935 END TEST nvmf_vfio_user_fuzz 00:16:07.935 ************************************ 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:07.935 ************************************ 00:16:07.935 START TEST nvmf_auth_target 00:16:07.935 ************************************ 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:07.935 * Looking for test storage... 00:16:07.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.935 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.935 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:07.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.935 --rc genhtml_branch_coverage=1 00:16:07.935 --rc genhtml_function_coverage=1 00:16:07.935 --rc genhtml_legend=1 00:16:07.935 --rc geninfo_all_blocks=1 00:16:07.935 --rc geninfo_unexecuted_blocks=1 00:16:07.935 00:16:07.935 ' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:07.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.935 --rc genhtml_branch_coverage=1 00:16:07.935 --rc genhtml_function_coverage=1 00:16:07.935 --rc genhtml_legend=1 00:16:07.935 --rc geninfo_all_blocks=1 00:16:07.935 --rc geninfo_unexecuted_blocks=1 00:16:07.935 00:16:07.935 ' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:07.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.935 --rc genhtml_branch_coverage=1 00:16:07.935 --rc genhtml_function_coverage=1 00:16:07.935 --rc genhtml_legend=1 00:16:07.935 --rc geninfo_all_blocks=1 00:16:07.935 --rc geninfo_unexecuted_blocks=1 00:16:07.935 00:16:07.935 ' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:07.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.935 --rc genhtml_branch_coverage=1 00:16:07.935 --rc genhtml_function_coverage=1 00:16:07.935 --rc genhtml_legend=1 00:16:07.935 
--rc geninfo_all_blocks=1 00:16:07.935 --rc geninfo_unexecuted_blocks=1 00:16:07.935 00:16:07.935 ' 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.935 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.936 
11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:07.936 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:07.936 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:07.936 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.761 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.761 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:14.762 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:14.762 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:14.762 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.762 
11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:14.762 Found net devices under 0000:31:00.0: cvl_0_0 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.762 
11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:14.762 Found net devices under 0000:31:00.1: cvl_0_1 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:14.762 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.762 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.024 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.024 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.024 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:15.024 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:15.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:16:15.284 00:16:15.284 --- 10.0.0.2 ping statistics --- 00:16:15.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.284 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:16:15.284 00:16:15.284 --- 10.0.0.1 ping statistics --- 00:16:15.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.284 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3395548 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3395548 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3395548 ']' 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.284 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.285 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:15.285 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.285 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3395588 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=adf1dc3b1e2fb0f5fcf25ec081087b1f585708713f87c502 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RRY 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key adf1dc3b1e2fb0f5fcf25ec081087b1f585708713f87c502 0 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 adf1dc3b1e2fb0f5fcf25ec081087b1f585708713f87c502 0 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=adf1dc3b1e2fb0f5fcf25ec081087b1f585708713f87c502 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RRY 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RRY 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.RRY 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6bc09948ccbf88251bff0b6642f30bdc241cdd5265aa6036cbd4103237c14bd1 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oSa 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6bc09948ccbf88251bff0b6642f30bdc241cdd5265aa6036cbd4103237c14bd1 3 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6bc09948ccbf88251bff0b6642f30bdc241cdd5265aa6036cbd4103237c14bd1 3 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6bc09948ccbf88251bff0b6642f30bdc241cdd5265aa6036cbd4103237c14bd1 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oSa 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oSa 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.oSa 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.226 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=70e687b8e2e13d58629b366c057c42a2 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gtV 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 70e687b8e2e13d58629b366c057c42a2 1 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
70e687b8e2e13d58629b366c057c42a2 1 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=70e687b8e2e13d58629b366c057c42a2 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gtV 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gtV 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.gtV 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:16.227 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=44f563ea447b45417b93f88ac34446590716c3825eb4d8bc 00:16:16.488 11:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Yz8 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 44f563ea447b45417b93f88ac34446590716c3825eb4d8bc 2 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 44f563ea447b45417b93f88ac34446590716c3825eb4d8bc 2 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=44f563ea447b45417b93f88ac34446590716c3825eb4d8bc 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Yz8 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Yz8 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Yz8 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f5f9bd22610c61ed9a5fc9e6955c0b24629a84a9a1b90f05 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SQB 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f5f9bd22610c61ed9a5fc9e6955c0b24629a84a9a1b90f05 2 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f5f9bd22610c61ed9a5fc9e6955c0b24629a84a9a1b90f05 2 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f5f9bd22610c61ed9a5fc9e6955c0b24629a84a9a1b90f05 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SQB 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SQB 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.SQB 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1fe57e0eb8a2583c62f3b3ba17777ee4 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nZv 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1fe57e0eb8a2583c62f3b3ba17777ee4 1 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1fe57e0eb8a2583c62f3b3ba17777ee4 1 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1fe57e0eb8a2583c62f3b3ba17777ee4 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nZv 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nZv 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.nZv 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0b10642e20727cbc0fb896f27b67e90cbad5f386dd2bf6f2bb01bea7a02dfe43 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ara 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0b10642e20727cbc0fb896f27b67e90cbad5f386dd2bf6f2bb01bea7a02dfe43 3 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 0b10642e20727cbc0fb896f27b67e90cbad5f386dd2bf6f2bb01bea7a02dfe43 3 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0b10642e20727cbc0fb896f27b67e90cbad5f386dd2bf6f2bb01bea7a02dfe43 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ara 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ara 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Ara 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3395548 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3395548 ']' 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
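The `gen_dhchap_key` / `format_dhchap_key` sequence traced above draws random bytes, hex-encodes them, wraps the result as a `DHHC-1` secret, and stores it mode 0600 in a temp file. A minimal sketch of that flow, inferred from the trace: the encoding (base64 of the ASCII key plus its CRC32, per the NVMe DH-HMAC-CHAP secret representation) is an assumption, and `od` is substituted for the trace's `xxd` for portability.

```shell
#!/usr/bin/env bash
# Sketch of the key-generation steps visible in the trace (nvmf/common.sh
# helpers). Encoding details are inferred, not taken from SPDK source.
gen_key() {
    local len=$1 digest=$2          # len = hex chars; digest index 0..3
    local key file
    # Random hex string (the trace uses: xxd -p -c0 -l <len/2> /dev/urandom)
    key=$(head -c $((len / 2)) /dev/urandom | od -An -tx1 | tr -d ' \n')
    file=$(mktemp -t spdk.key-XXX)
    # DHHC-1 secret: base64(ascii_key || crc32(ascii_key) little-endian)
    python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
    chmod 0600 "$file"              # keys must not be world-readable
    echo "$file"
}

f=$(gen_key 48 2)                   # sha384-sized key, as in the trace
cat "$f"                            # format: DHHC-1:02:<base64>:
```

The generated secret matches the `DHHC-1:xx:...:` strings passed to `nvme connect --dhchap-secret` later in this log.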
00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.488 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3395588 /var/tmp/host.sock 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3395588 ']' 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:16.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
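The `waitforlisten` calls above ("Waiting for process to start up and listen on UNIX domain socket...") poll with `max_retries=100` until the target's RPC socket exists. A rough sketch of that polling loop, assuming the real helper in `autotest_common.sh` works along these lines (the probe and interval here are guesses):

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten step: wait until a process is alive
# and its RPC unix socket appears, bounded by a retry count.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        [ -S "$rpc_addr" ] && return 0           # socket is present
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the log this is invoked once per daemon: against `/var/tmp/spdk.sock` for the target and `/var/tmp/host.sock` for the host-side RPC server.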
00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.749 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.009 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.009 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:17.009 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:17.009 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.009 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RRY 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RRY 00:16:17.009 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RRY 00:16:17.270 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.oSa ]] 00:16:17.270 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oSa 00:16:17.270 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.270 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.270 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.270 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oSa 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oSa 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gtV 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gtV 00:16:17.271 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gtV 00:16:17.532 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.Yz8 ]] 00:16:17.532 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yz8 00:16:17.532 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.532 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.532 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.532 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yz8 00:16:17.532 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yz8 00:16:17.793 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:17.793 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.SQB 00:16:17.793 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.793 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.793 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.793 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.SQB 00:16:17.793 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.SQB 00:16:18.055 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.nZv ]] 00:16:18.055 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nZv 00:16:18.055 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.055 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.055 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.055 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nZv 00:16:18.055 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nZv 00:16:18.055 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.055 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ara 00:16:18.055 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.055 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.055 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.055 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Ara 00:16:18.055 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Ara 00:16:18.316 11:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:18.316 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:18.316 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.316 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.316 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.316 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.577 11:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.577 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.838 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.838 { 00:16:18.838 "cntlid": 1, 00:16:18.838 "qid": 0, 00:16:18.838 "state": "enabled", 00:16:18.838 "thread": "nvmf_tgt_poll_group_000", 00:16:18.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:18.838 "listen_address": { 00:16:18.838 "trtype": "TCP", 00:16:18.838 "adrfam": "IPv4", 00:16:18.838 "traddr": "10.0.0.2", 00:16:18.838 "trsvcid": "4420" 00:16:18.838 }, 00:16:18.838 "peer_address": { 00:16:18.838 "trtype": "TCP", 00:16:18.838 "adrfam": "IPv4", 00:16:18.838 "traddr": "10.0.0.1", 00:16:18.838 "trsvcid": "40562" 00:16:18.838 }, 00:16:18.838 "auth": { 00:16:18.838 "state": "completed", 00:16:18.838 "digest": "sha256", 00:16:18.838 "dhgroup": "null" 00:16:18.838 } 00:16:18.838 } 00:16:18.838 ]' 00:16:18.838 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.105 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.105 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.105 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.105 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.105 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.105 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.105 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.366 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:19.367 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:19.937 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.198 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.459 00:16:20.459 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.459 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.459 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.719 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.719 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.719 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.719 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.719 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.719 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.719 { 00:16:20.719 "cntlid": 3, 00:16:20.719 "qid": 0, 00:16:20.720 "state": "enabled", 00:16:20.720 "thread": "nvmf_tgt_poll_group_000", 00:16:20.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:20.720 "listen_address": { 00:16:20.720 "trtype": "TCP", 00:16:20.720 "adrfam": "IPv4", 00:16:20.720 
"traddr": "10.0.0.2", 00:16:20.720 "trsvcid": "4420" 00:16:20.720 }, 00:16:20.720 "peer_address": { 00:16:20.720 "trtype": "TCP", 00:16:20.720 "adrfam": "IPv4", 00:16:20.720 "traddr": "10.0.0.1", 00:16:20.720 "trsvcid": "40592" 00:16:20.720 }, 00:16:20.720 "auth": { 00:16:20.720 "state": "completed", 00:16:20.720 "digest": "sha256", 00:16:20.720 "dhgroup": "null" 00:16:20.720 } 00:16:20.720 } 00:16:20.720 ]' 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.720 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.980 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:20.980 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:21.920 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.920 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:21.920 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.920 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.920 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.920 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.920 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.921 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.181 00:16:22.181 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.181 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.181 
11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.181 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.441 { 00:16:22.441 "cntlid": 5, 00:16:22.441 "qid": 0, 00:16:22.441 "state": "enabled", 00:16:22.441 "thread": "nvmf_tgt_poll_group_000", 00:16:22.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:22.441 "listen_address": { 00:16:22.441 "trtype": "TCP", 00:16:22.441 "adrfam": "IPv4", 00:16:22.441 "traddr": "10.0.0.2", 00:16:22.441 "trsvcid": "4420" 00:16:22.441 }, 00:16:22.441 "peer_address": { 00:16:22.441 "trtype": "TCP", 00:16:22.441 "adrfam": "IPv4", 00:16:22.441 "traddr": "10.0.0.1", 00:16:22.441 "trsvcid": "40618" 00:16:22.441 }, 00:16:22.441 "auth": { 00:16:22.441 "state": "completed", 00:16:22.441 "digest": "sha256", 00:16:22.441 "dhgroup": "null" 00:16:22.441 } 00:16:22.441 } 00:16:22.441 ]' 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.441 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.701 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:22.701 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.271 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:23.531 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.532 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.791 00:16:23.791 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.791 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.791 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.052 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.052 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.052 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.052 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.052 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.052 
11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.052 { 00:16:24.052 "cntlid": 7, 00:16:24.053 "qid": 0, 00:16:24.053 "state": "enabled", 00:16:24.053 "thread": "nvmf_tgt_poll_group_000", 00:16:24.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:24.053 "listen_address": { 00:16:24.053 "trtype": "TCP", 00:16:24.053 "adrfam": "IPv4", 00:16:24.053 "traddr": "10.0.0.2", 00:16:24.053 "trsvcid": "4420" 00:16:24.053 }, 00:16:24.053 "peer_address": { 00:16:24.053 "trtype": "TCP", 00:16:24.053 "adrfam": "IPv4", 00:16:24.053 "traddr": "10.0.0.1", 00:16:24.053 "trsvcid": "40644" 00:16:24.053 }, 00:16:24.053 "auth": { 00:16:24.053 "state": "completed", 00:16:24.053 "digest": "sha256", 00:16:24.053 "dhgroup": "null" 00:16:24.053 } 00:16:24.053 } 00:16:24.053 ]' 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.053 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.314 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:24.314 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:24.883 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.883 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:24.883 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.883 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.883 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.142 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.143 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.403 00:16:25.403 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.403 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.403 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.664 { 00:16:25.664 "cntlid": 9, 00:16:25.664 "qid": 0, 00:16:25.664 "state": "enabled", 00:16:25.664 "thread": "nvmf_tgt_poll_group_000", 00:16:25.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:25.664 "listen_address": { 00:16:25.664 "trtype": "TCP", 00:16:25.664 "adrfam": "IPv4", 00:16:25.664 "traddr": "10.0.0.2", 00:16:25.664 "trsvcid": "4420" 00:16:25.664 }, 00:16:25.664 "peer_address": { 00:16:25.664 "trtype": "TCP", 00:16:25.664 "adrfam": "IPv4", 00:16:25.664 "traddr": "10.0.0.1", 00:16:25.664 "trsvcid": "40666" 00:16:25.664 
}, 00:16:25.664 "auth": { 00:16:25.664 "state": "completed", 00:16:25.664 "digest": "sha256", 00:16:25.664 "dhgroup": "ffdhe2048" 00:16:25.664 } 00:16:25.664 } 00:16:25.664 ]' 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.664 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.925 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:25.925 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret 
DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.868 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.129 00:16:27.129 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.129 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.129 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.390 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.390 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.390 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.390 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.390 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.390 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.390 { 00:16:27.390 "cntlid": 11, 00:16:27.390 "qid": 0, 00:16:27.390 "state": "enabled", 00:16:27.390 "thread": "nvmf_tgt_poll_group_000", 00:16:27.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:27.390 "listen_address": { 00:16:27.390 "trtype": "TCP", 00:16:27.390 "adrfam": "IPv4", 00:16:27.390 "traddr": "10.0.0.2", 00:16:27.390 "trsvcid": "4420" 00:16:27.390 }, 00:16:27.390 "peer_address": { 00:16:27.390 "trtype": "TCP", 00:16:27.391 "adrfam": "IPv4", 00:16:27.391 "traddr": "10.0.0.1", 00:16:27.391 "trsvcid": "34294" 00:16:27.391 }, 00:16:27.391 "auth": { 00:16:27.391 "state": "completed", 00:16:27.391 "digest": "sha256", 00:16:27.391 "dhgroup": "ffdhe2048" 00:16:27.391 } 00:16:27.391 } 00:16:27.391 ]' 00:16:27.391 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.391 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.391 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.391 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.391 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.391 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.391 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.391 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.652 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:27.652 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.594 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.856 00:16:28.856 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.856 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.856 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.116 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.117 11:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.117 { 00:16:29.117 "cntlid": 13, 00:16:29.117 "qid": 0, 00:16:29.117 "state": "enabled", 00:16:29.117 "thread": "nvmf_tgt_poll_group_000", 00:16:29.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:29.117 "listen_address": { 00:16:29.117 "trtype": "TCP", 00:16:29.117 "adrfam": "IPv4", 00:16:29.117 "traddr": "10.0.0.2", 00:16:29.117 "trsvcid": "4420" 00:16:29.117 }, 00:16:29.117 "peer_address": { 00:16:29.117 "trtype": "TCP", 00:16:29.117 "adrfam": "IPv4", 00:16:29.117 "traddr": "10.0.0.1", 00:16:29.117 "trsvcid": "34312" 00:16:29.117 }, 00:16:29.117 "auth": { 00:16:29.117 "state": "completed", 00:16:29.117 "digest": "sha256", 00:16:29.117 "dhgroup": "ffdhe2048" 00:16:29.117 } 00:16:29.117 } 00:16:29.117 ]' 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.117 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.378 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:29.378 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.949 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.210 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.471 00:16:30.471 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.471 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.471 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.732 { 00:16:30.732 "cntlid": 15, 00:16:30.732 "qid": 0, 00:16:30.732 "state": "enabled", 00:16:30.732 "thread": "nvmf_tgt_poll_group_000", 00:16:30.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:30.732 "listen_address": { 00:16:30.732 "trtype": "TCP", 00:16:30.732 "adrfam": "IPv4", 00:16:30.732 "traddr": "10.0.0.2", 00:16:30.732 "trsvcid": "4420" 00:16:30.732 }, 00:16:30.732 "peer_address": { 00:16:30.732 "trtype": "TCP", 00:16:30.732 "adrfam": "IPv4", 00:16:30.732 "traddr": "10.0.0.1", 
00:16:30.732 "trsvcid": "34326" 00:16:30.732 }, 00:16:30.732 "auth": { 00:16:30.732 "state": "completed", 00:16:30.732 "digest": "sha256", 00:16:30.732 "dhgroup": "ffdhe2048" 00:16:30.732 } 00:16:30.732 } 00:16:30.732 ]' 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.732 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.992 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:30.992 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:31.564 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.564 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.564 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.564 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.565 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.565 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.565 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.565 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.565 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.826 11:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.826 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.087 00:16:32.087 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.087 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.087 11:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.357 { 00:16:32.357 "cntlid": 17, 00:16:32.357 "qid": 0, 00:16:32.357 "state": "enabled", 00:16:32.357 "thread": "nvmf_tgt_poll_group_000", 00:16:32.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:32.357 "listen_address": { 00:16:32.357 "trtype": "TCP", 00:16:32.357 "adrfam": "IPv4", 00:16:32.357 "traddr": "10.0.0.2", 00:16:32.357 "trsvcid": "4420" 00:16:32.357 }, 00:16:32.357 "peer_address": { 00:16:32.357 "trtype": "TCP", 00:16:32.357 "adrfam": "IPv4", 00:16:32.357 "traddr": "10.0.0.1", 00:16:32.357 "trsvcid": "34364" 00:16:32.357 }, 00:16:32.357 "auth": { 00:16:32.357 "state": "completed", 00:16:32.357 "digest": "sha256", 00:16:32.357 "dhgroup": "ffdhe3072" 00:16:32.357 } 00:16:32.357 } 00:16:32.357 ]' 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.357 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.358 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.626 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:32.626 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.566 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.826 00:16:33.826 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.826 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.826 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.087 
11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.087 { 00:16:34.087 "cntlid": 19, 00:16:34.087 "qid": 0, 00:16:34.087 "state": "enabled", 00:16:34.087 "thread": "nvmf_tgt_poll_group_000", 00:16:34.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:34.087 "listen_address": { 00:16:34.087 "trtype": "TCP", 00:16:34.087 "adrfam": "IPv4", 00:16:34.087 "traddr": "10.0.0.2", 00:16:34.087 "trsvcid": "4420" 00:16:34.087 }, 00:16:34.087 "peer_address": { 00:16:34.087 "trtype": "TCP", 00:16:34.087 "adrfam": "IPv4", 00:16:34.087 "traddr": "10.0.0.1", 00:16:34.087 "trsvcid": "34384" 00:16:34.087 }, 00:16:34.087 "auth": { 00:16:34.087 "state": "completed", 00:16:34.087 "digest": "sha256", 00:16:34.087 "dhgroup": "ffdhe3072" 00:16:34.087 } 00:16:34.087 } 00:16:34.087 ]' 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.087 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.088 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.088 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.348 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:34.348 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.291 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.292 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.292 11:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.552 00:16:35.552 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.552 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.552 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.814 { 00:16:35.814 "cntlid": 21, 00:16:35.814 "qid": 0, 00:16:35.814 "state": "enabled", 00:16:35.814 "thread": "nvmf_tgt_poll_group_000", 00:16:35.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:35.814 "listen_address": { 00:16:35.814 "trtype": "TCP", 00:16:35.814 "adrfam": "IPv4", 00:16:35.814 "traddr": "10.0.0.2", 00:16:35.814 "trsvcid": "4420" 00:16:35.814 }, 00:16:35.814 "peer_address": { 
00:16:35.814 "trtype": "TCP", 00:16:35.814 "adrfam": "IPv4", 00:16:35.814 "traddr": "10.0.0.1", 00:16:35.814 "trsvcid": "34404" 00:16:35.814 }, 00:16:35.814 "auth": { 00:16:35.814 "state": "completed", 00:16:35.814 "digest": "sha256", 00:16:35.814 "dhgroup": "ffdhe3072" 00:16:35.814 } 00:16:35.814 } 00:16:35.814 ]' 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.814 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.075 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:36.075 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.017 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.017 11:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.017 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.278 00:16:37.278 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.278 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.278 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.540 { 00:16:37.540 "cntlid": 23, 00:16:37.540 "qid": 0, 00:16:37.540 "state": "enabled", 00:16:37.540 "thread": "nvmf_tgt_poll_group_000", 00:16:37.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:37.540 "listen_address": { 00:16:37.540 "trtype": "TCP", 00:16:37.540 "adrfam": "IPv4", 00:16:37.540 "traddr": "10.0.0.2", 00:16:37.540 "trsvcid": "4420" 00:16:37.540 }, 00:16:37.540 "peer_address": { 00:16:37.540 "trtype": "TCP", 00:16:37.540 "adrfam": "IPv4", 00:16:37.540 "traddr": "10.0.0.1", 00:16:37.540 "trsvcid": "38442" 00:16:37.540 }, 00:16:37.540 "auth": { 00:16:37.540 "state": "completed", 00:16:37.540 "digest": "sha256", 00:16:37.540 "dhgroup": "ffdhe3072" 00:16:37.540 } 00:16:37.540 } 00:16:37.540 ]' 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.540 11:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.540 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.801 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:37.801 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
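The `--dhchap-secret` values passed to `nvme connect` above use the standard NVMe DH-HMAC-CHAP secret representation (`DHHC-1:<hash id>:<base64 payload>:`). A small, hypothetical helper (not part of `target/auth.sh`) can sanity-check that format in pure bash; the function name and messages here are illustrative only:

```shell
# Hypothetical format check for DHCHAP secrets like the DHHC-1:03:...: values
# seen in the log. Field meanings: prefix "DHHC-1", then a two-digit hash id
# (00 = no transformation, 01/02/03 = SHA-256/384/512), then a base64 payload.
check_dhchap_secret() {
  local secret=$1
  local prefix id payload
  IFS=: read -r prefix id payload _ <<< "$secret"
  [[ $prefix == DHHC-1 ]] || { echo "bad prefix"; return 1; }
  case $id in
    00|01|02|03) ;;                       # known hash identifiers
    *) echo "bad hash id"; return 1 ;;
  esac
  [[ -n $payload ]] || { echo "empty payload"; return 1; }
  echo "ok"
}

check_dhchap_secret "DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=:"
```

This only validates the envelope; the payload itself is opaque keying material that the controller and host verify cryptographically during the DH-HMAC-CHAP exchange.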
00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.372 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.634 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.895 00:16:38.895 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.895 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.895 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.155 11:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.155 { 00:16:39.155 "cntlid": 25, 00:16:39.155 "qid": 0, 00:16:39.155 "state": "enabled", 00:16:39.155 "thread": "nvmf_tgt_poll_group_000", 00:16:39.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:39.155 "listen_address": { 00:16:39.155 "trtype": "TCP", 00:16:39.155 "adrfam": "IPv4", 00:16:39.155 "traddr": "10.0.0.2", 00:16:39.155 "trsvcid": "4420" 00:16:39.155 }, 00:16:39.155 "peer_address": { 00:16:39.155 "trtype": "TCP", 00:16:39.155 "adrfam": "IPv4", 00:16:39.155 "traddr": "10.0.0.1", 00:16:39.155 "trsvcid": "38466" 00:16:39.155 }, 00:16:39.155 "auth": { 00:16:39.155 "state": "completed", 00:16:39.155 "digest": "sha256", 00:16:39.155 "dhgroup": "ffdhe4096" 00:16:39.155 } 00:16:39.155 } 00:16:39.155 ]' 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.155 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.415 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:39.415 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:40.070 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.070 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:40.070 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.070 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.070 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.070 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.070 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.070 11:15:46 
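After each attach, the test pulls the qpair list via `nvmf_subsystem_get_qpairs` and checks `auth.digest`, `auth.dhgroup`, and `auth.state` with `jq`. A jq-free stand-in for that extraction step (the helper name is an assumption, not something from the SPDK tree) can be sketched with a bash regex:

```shell
# Illustrative substitute for the jq -r '.[0].auth.digest' style checks in the
# log: pull a quoted string field out of the qpairs JSON with bash's =~ .
get_auth_field() {
  local json=$1 field=$2
  local re="\"$field\": *\"([^\"]+)\""
  [[ $json =~ $re ]] && printf '%s\n' "${BASH_REMATCH[1]}"
}

# Abbreviated sample of the JSON shape nvmf_subsystem_get_qpairs returns above.
qpairs='[ { "auth": { "state": "completed", "digest": "sha256", "dhgroup": "ffdhe4096" } } ]'

[[ $(get_auth_field "$qpairs" digest) == sha256 ]] &&
[[ $(get_auth_field "$qpairs" state) == completed ]] &&
echo "auth fields verified"
```

The real test uses `jq` for robustness; a regex like this is only adequate because the field it targets appears once per qpair entry.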
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.331 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.593 00:16:40.593 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.593 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.593 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.854 { 00:16:40.854 "cntlid": 27, 00:16:40.854 "qid": 0, 00:16:40.854 "state": "enabled", 00:16:40.854 "thread": "nvmf_tgt_poll_group_000", 00:16:40.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:40.854 "listen_address": { 00:16:40.854 "trtype": "TCP", 00:16:40.854 "adrfam": "IPv4", 00:16:40.854 "traddr": "10.0.0.2", 00:16:40.854 
"trsvcid": "4420" 00:16:40.854 }, 00:16:40.854 "peer_address": { 00:16:40.854 "trtype": "TCP", 00:16:40.854 "adrfam": "IPv4", 00:16:40.854 "traddr": "10.0.0.1", 00:16:40.854 "trsvcid": "38494" 00:16:40.854 }, 00:16:40.854 "auth": { 00:16:40.854 "state": "completed", 00:16:40.854 "digest": "sha256", 00:16:40.854 "dhgroup": "ffdhe4096" 00:16:40.854 } 00:16:40.854 } 00:16:40.854 ]' 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.854 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.115 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:41.115 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:41.684 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.944 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.944 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.944 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.944 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.944 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.944 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.944 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.945 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.205 00:16:42.205 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.205 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:42.205 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.465 { 00:16:42.465 "cntlid": 29, 00:16:42.465 "qid": 0, 00:16:42.465 "state": "enabled", 00:16:42.465 "thread": "nvmf_tgt_poll_group_000", 00:16:42.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:42.465 "listen_address": { 00:16:42.465 "trtype": "TCP", 00:16:42.465 "adrfam": "IPv4", 00:16:42.465 "traddr": "10.0.0.2", 00:16:42.465 "trsvcid": "4420" 00:16:42.465 }, 00:16:42.465 "peer_address": { 00:16:42.465 "trtype": "TCP", 00:16:42.465 "adrfam": "IPv4", 00:16:42.465 "traddr": "10.0.0.1", 00:16:42.465 "trsvcid": "38524" 00:16:42.465 }, 00:16:42.465 "auth": { 00:16:42.465 "state": "completed", 00:16:42.465 "digest": "sha256", 00:16:42.465 "dhgroup": "ffdhe4096" 00:16:42.465 } 00:16:42.465 } 00:16:42.465 ]' 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.465 11:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.465 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.726 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.726 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.726 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.726 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:42.726 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.666 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.667 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:43.667 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.667 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.667 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.667 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.667 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.667 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.927 00:16:43.927 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.927 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.927 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.188 { 00:16:44.188 "cntlid": 31, 00:16:44.188 "qid": 0, 00:16:44.188 "state": "enabled", 00:16:44.188 "thread": "nvmf_tgt_poll_group_000", 00:16:44.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:44.188 "listen_address": { 00:16:44.188 "trtype": "TCP", 00:16:44.188 "adrfam": "IPv4", 00:16:44.188 "traddr": "10.0.0.2", 00:16:44.188 "trsvcid": "4420" 00:16:44.188 }, 00:16:44.188 "peer_address": { 00:16:44.188 "trtype": "TCP", 00:16:44.188 "adrfam": "IPv4", 00:16:44.188 "traddr": "10.0.0.1", 00:16:44.188 "trsvcid": "38556" 00:16:44.188 }, 00:16:44.188 "auth": { 00:16:44.188 "state": "completed", 00:16:44.188 "digest": "sha256", 00:16:44.188 "dhgroup": "ffdhe4096" 00:16:44.188 } 00:16:44.188 } 00:16:44.188 ]' 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.188 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.450 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.450 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.450 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.450 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:44.450 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:45.390 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.390 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.390 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.390 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.390 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.390 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.391 11:15:51 
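The `@119`/`@120` loop markers show the test iterating a dhgroup-by-keyid matrix, calling `connect_authenticate` for each combination. The control flow can be sketched as runnable bash, with the RPC-driven body replaced by an `echo`; the exact dhgroup list is an assumption (the log shows ffdhe3072, ffdhe4096, and ffdhe6144 among them):

```shell
# Minimal sketch of the nested loop structure target/auth.sh appears to drive.
# The real connect_authenticate step issues SPDK RPCs (add_host, attach, get_qpairs,
# detach); here it is stubbed with an echo so the loop itself can be exercised.
run_matrix() {
  local dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed set
  local keys=(key0 key1 key2 key3)
  local dhgroup keyid
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      echo "connect_authenticate sha256 $dhgroup $keyid"
    done
  done
}

run_matrix
```

Each iteration in the real script re-runs `bdev_nvme_set_options` with the current dhgroup before reconnecting, which is why the log repeats the set-options/attach/verify/detach sequence for every key.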
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.391 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.963 00:16:45.963 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.963 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.963 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.963 { 00:16:45.963 "cntlid": 33, 00:16:45.963 "qid": 0, 00:16:45.963 "state": "enabled", 00:16:45.963 "thread": "nvmf_tgt_poll_group_000", 00:16:45.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:45.963 "listen_address": { 00:16:45.963 "trtype": "TCP", 00:16:45.963 "adrfam": "IPv4", 00:16:45.963 "traddr": "10.0.0.2", 00:16:45.963 
"trsvcid": "4420" 00:16:45.963 }, 00:16:45.963 "peer_address": { 00:16:45.963 "trtype": "TCP", 00:16:45.963 "adrfam": "IPv4", 00:16:45.963 "traddr": "10.0.0.1", 00:16:45.963 "trsvcid": "47288" 00:16:45.963 }, 00:16:45.963 "auth": { 00:16:45.963 "state": "completed", 00:16:45.963 "digest": "sha256", 00:16:45.963 "dhgroup": "ffdhe6144" 00:16:45.963 } 00:16:45.963 } 00:16:45.963 ]' 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.963 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.224 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.224 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.224 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.224 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.224 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.225 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:46.225 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.167 11:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.167 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.428 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.428 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.428 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.428 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.689 00:16:47.689 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.689 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.689 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.951 { 00:16:47.951 "cntlid": 35, 00:16:47.951 "qid": 0, 00:16:47.951 "state": "enabled", 00:16:47.951 "thread": "nvmf_tgt_poll_group_000", 00:16:47.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:47.951 "listen_address": { 00:16:47.951 "trtype": "TCP", 00:16:47.951 "adrfam": "IPv4", 00:16:47.951 "traddr": "10.0.0.2", 00:16:47.951 "trsvcid": "4420" 00:16:47.951 }, 00:16:47.951 "peer_address": { 00:16:47.951 "trtype": "TCP", 00:16:47.951 "adrfam": "IPv4", 00:16:47.951 "traddr": "10.0.0.1", 00:16:47.951 "trsvcid": "47322" 00:16:47.951 }, 00:16:47.951 "auth": { 00:16:47.951 "state": "completed", 00:16:47.951 "digest": "sha256", 00:16:47.951 "dhgroup": "ffdhe6144" 00:16:47.951 } 00:16:47.951 } 00:16:47.951 ]' 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.951 11:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.951 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.951 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.951 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.951 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.951 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.951 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.214 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:48.214 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:49.155 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.155 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.155 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.416 00:16:49.416 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.416 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.416 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.677 11:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.677 { 00:16:49.677 "cntlid": 37, 00:16:49.677 "qid": 0, 00:16:49.677 "state": "enabled", 00:16:49.677 "thread": "nvmf_tgt_poll_group_000", 00:16:49.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:49.677 "listen_address": { 00:16:49.677 "trtype": "TCP", 00:16:49.677 "adrfam": "IPv4", 00:16:49.677 "traddr": "10.0.0.2", 00:16:49.677 "trsvcid": "4420" 00:16:49.677 }, 00:16:49.677 "peer_address": { 00:16:49.677 "trtype": "TCP", 00:16:49.677 "adrfam": "IPv4", 00:16:49.677 "traddr": "10.0.0.1", 00:16:49.677 "trsvcid": "47334" 00:16:49.677 }, 00:16:49.677 "auth": { 00:16:49.677 "state": "completed", 00:16:49.677 "digest": "sha256", 00:16:49.677 "dhgroup": "ffdhe6144" 00:16:49.677 } 00:16:49.677 } 00:16:49.677 ]' 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.677 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.938 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.938 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.938 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.938 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.938 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.938 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:49.939 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.881 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.881 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.455 00:16:51.455 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.455 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.455 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.455 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.455 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.455 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.455 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.456 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.456 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.456 { 00:16:51.456 "cntlid": 39, 00:16:51.456 "qid": 0, 00:16:51.456 "state": "enabled", 00:16:51.456 "thread": "nvmf_tgt_poll_group_000", 00:16:51.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:51.456 "listen_address": { 00:16:51.456 "trtype": "TCP", 00:16:51.456 "adrfam": 
"IPv4", 00:16:51.456 "traddr": "10.0.0.2", 00:16:51.456 "trsvcid": "4420" 00:16:51.456 }, 00:16:51.456 "peer_address": { 00:16:51.456 "trtype": "TCP", 00:16:51.456 "adrfam": "IPv4", 00:16:51.456 "traddr": "10.0.0.1", 00:16:51.456 "trsvcid": "47356" 00:16:51.456 }, 00:16:51.456 "auth": { 00:16:51.456 "state": "completed", 00:16:51.456 "digest": "sha256", 00:16:51.456 "dhgroup": "ffdhe6144" 00:16:51.456 } 00:16:51.456 } 00:16:51.456 ]' 00:16:51.456 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.456 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.456 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.717 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.717 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.717 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.717 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.717 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.978 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:51.978 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:52.549 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.549 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:52.549 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.549 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.549 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.550 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.550 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.550 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.550 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.811 
11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.811 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.383 00:16:53.383 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.383 11:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.383 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.644 { 00:16:53.644 "cntlid": 41, 00:16:53.644 "qid": 0, 00:16:53.644 "state": "enabled", 00:16:53.644 "thread": "nvmf_tgt_poll_group_000", 00:16:53.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:53.644 "listen_address": { 00:16:53.644 "trtype": "TCP", 00:16:53.644 "adrfam": "IPv4", 00:16:53.644 "traddr": "10.0.0.2", 00:16:53.644 "trsvcid": "4420" 00:16:53.644 }, 00:16:53.644 "peer_address": { 00:16:53.644 "trtype": "TCP", 00:16:53.644 "adrfam": "IPv4", 00:16:53.644 "traddr": "10.0.0.1", 00:16:53.644 "trsvcid": "47390" 00:16:53.644 }, 00:16:53.644 "auth": { 00:16:53.644 "state": "completed", 00:16:53.644 "digest": "sha256", 00:16:53.644 "dhgroup": "ffdhe8192" 00:16:53.644 } 00:16:53.644 } 00:16:53.644 ]' 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.644 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.905 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:53.905 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:16:54.477 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.744 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.314 00:16:55.314 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.314 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.314 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.574 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.574 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.575 11:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.575 { 00:16:55.575 "cntlid": 43, 00:16:55.575 "qid": 0, 00:16:55.575 "state": "enabled", 00:16:55.575 "thread": "nvmf_tgt_poll_group_000", 00:16:55.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:55.575 "listen_address": { 00:16:55.575 "trtype": "TCP", 00:16:55.575 "adrfam": "IPv4", 00:16:55.575 "traddr": "10.0.0.2", 00:16:55.575 "trsvcid": "4420" 00:16:55.575 }, 00:16:55.575 "peer_address": { 00:16:55.575 "trtype": "TCP", 00:16:55.575 "adrfam": "IPv4", 00:16:55.575 "traddr": "10.0.0.1", 00:16:55.575 "trsvcid": "47422" 00:16:55.575 }, 00:16:55.575 "auth": { 00:16:55.575 "state": "completed", 00:16:55.575 "digest": "sha256", 00:16:55.575 "dhgroup": "ffdhe8192" 00:16:55.575 } 00:16:55.575 } 00:16:55.575 ]' 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.575 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.835 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:55.835 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.776 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.348 00:16:57.348 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.348 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.348 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.609 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.609 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.609 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.609 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.610 { 00:16:57.610 "cntlid": 45, 00:16:57.610 "qid": 0, 00:16:57.610 "state": "enabled", 00:16:57.610 "thread": "nvmf_tgt_poll_group_000", 00:16:57.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:57.610 
"listen_address": { 00:16:57.610 "trtype": "TCP", 00:16:57.610 "adrfam": "IPv4", 00:16:57.610 "traddr": "10.0.0.2", 00:16:57.610 "trsvcid": "4420" 00:16:57.610 }, 00:16:57.610 "peer_address": { 00:16:57.610 "trtype": "TCP", 00:16:57.610 "adrfam": "IPv4", 00:16:57.610 "traddr": "10.0.0.1", 00:16:57.610 "trsvcid": "54964" 00:16:57.610 }, 00:16:57.610 "auth": { 00:16:57.610 "state": "completed", 00:16:57.610 "digest": "sha256", 00:16:57.610 "dhgroup": "ffdhe8192" 00:16:57.610 } 00:16:57.610 } 00:16:57.610 ]' 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.610 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.871 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:57.871 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.814 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.815 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.815 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.815 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.815 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.387 00:16:59.387 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.387 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:59.387 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.647 { 00:16:59.647 "cntlid": 47, 00:16:59.647 "qid": 0, 00:16:59.647 "state": "enabled", 00:16:59.647 "thread": "nvmf_tgt_poll_group_000", 00:16:59.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:59.647 "listen_address": { 00:16:59.647 "trtype": "TCP", 00:16:59.647 "adrfam": "IPv4", 00:16:59.647 "traddr": "10.0.0.2", 00:16:59.647 "trsvcid": "4420" 00:16:59.647 }, 00:16:59.647 "peer_address": { 00:16:59.647 "trtype": "TCP", 00:16:59.647 "adrfam": "IPv4", 00:16:59.647 "traddr": "10.0.0.1", 00:16:59.647 "trsvcid": "54994" 00:16:59.647 }, 00:16:59.647 "auth": { 00:16:59.647 "state": "completed", 00:16:59.647 "digest": "sha256", 00:16:59.647 "dhgroup": "ffdhe8192" 00:16:59.647 } 00:16:59.647 } 00:16:59.647 ]' 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.647 11:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.647 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.922 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:16:59.922 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.865 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.866 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.866 
11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.866 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.866 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.866 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.866 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.866 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.127 00:17:01.127 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.127 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.127 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.387 { 00:17:01.387 "cntlid": 49, 00:17:01.387 "qid": 0, 00:17:01.387 "state": "enabled", 00:17:01.387 "thread": "nvmf_tgt_poll_group_000", 00:17:01.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:01.387 "listen_address": { 00:17:01.387 "trtype": "TCP", 00:17:01.387 "adrfam": "IPv4", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "trsvcid": "4420" 00:17:01.387 }, 00:17:01.387 "peer_address": { 00:17:01.387 "trtype": "TCP", 00:17:01.387 "adrfam": "IPv4", 00:17:01.387 "traddr": "10.0.0.1", 00:17:01.387 "trsvcid": "55016" 00:17:01.387 }, 00:17:01.387 "auth": { 00:17:01.387 "state": "completed", 00:17:01.387 "digest": "sha384", 00:17:01.387 "dhgroup": "null" 00:17:01.387 } 00:17:01.387 } 00:17:01.387 ]' 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:01.387 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.647 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:01.647 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:02.251 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.251 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.251 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.251 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.251 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.511 11:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.511 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.770 00:17:02.770 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.770 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.770 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.030 { 00:17:03.030 "cntlid": 51, 00:17:03.030 "qid": 0, 00:17:03.030 "state": "enabled", 00:17:03.030 "thread": "nvmf_tgt_poll_group_000", 00:17:03.030 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:03.030 "listen_address": { 00:17:03.030 "trtype": "TCP", 00:17:03.030 "adrfam": "IPv4", 00:17:03.030 "traddr": "10.0.0.2", 00:17:03.030 "trsvcid": "4420" 00:17:03.030 }, 00:17:03.030 "peer_address": { 00:17:03.030 "trtype": "TCP", 00:17:03.030 "adrfam": "IPv4", 00:17:03.030 "traddr": "10.0.0.1", 00:17:03.030 "trsvcid": "55038" 00:17:03.030 }, 00:17:03.030 "auth": { 00:17:03.030 "state": "completed", 00:17:03.030 "digest": "sha384", 00:17:03.030 "dhgroup": "null" 00:17:03.030 } 00:17:03.030 } 00:17:03.030 ]' 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.030 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.289 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:03.289 11:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.231 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.492 00:17:04.492 11:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.492 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.492 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.753 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.753 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.754 { 00:17:04.754 "cntlid": 53, 00:17:04.754 "qid": 0, 00:17:04.754 "state": "enabled", 00:17:04.754 "thread": "nvmf_tgt_poll_group_000", 00:17:04.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:04.754 "listen_address": { 00:17:04.754 "trtype": "TCP", 00:17:04.754 "adrfam": "IPv4", 00:17:04.754 "traddr": "10.0.0.2", 00:17:04.754 "trsvcid": "4420" 00:17:04.754 }, 00:17:04.754 "peer_address": { 00:17:04.754 "trtype": "TCP", 00:17:04.754 "adrfam": "IPv4", 00:17:04.754 "traddr": "10.0.0.1", 00:17:04.754 "trsvcid": "55060" 00:17:04.754 }, 00:17:04.754 "auth": { 00:17:04.754 "state": "completed", 00:17:04.754 "digest": "sha384", 00:17:04.754 "dhgroup": "null" 00:17:04.754 } 00:17:04.754 } 00:17:04.754 ]' 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.754 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.014 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:05.014 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.957 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:05.957 
11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.957 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.218 00:17:06.218 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.218 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.219 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.480 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.480 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.480 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.480 11:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.480 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.480 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.480 { 00:17:06.480 "cntlid": 55, 00:17:06.480 "qid": 0, 00:17:06.480 "state": "enabled", 00:17:06.480 "thread": "nvmf_tgt_poll_group_000", 00:17:06.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:06.480 "listen_address": { 00:17:06.480 "trtype": "TCP", 00:17:06.480 "adrfam": "IPv4", 00:17:06.480 "traddr": "10.0.0.2", 00:17:06.480 "trsvcid": "4420" 00:17:06.480 }, 00:17:06.480 "peer_address": { 00:17:06.480 "trtype": "TCP", 00:17:06.480 "adrfam": "IPv4", 00:17:06.480 "traddr": "10.0.0.1", 00:17:06.480 "trsvcid": "36598" 00:17:06.480 }, 00:17:06.480 "auth": { 00:17:06.480 "state": "completed", 00:17:06.480 "digest": "sha384", 00:17:06.480 "dhgroup": "null" 00:17:06.480 } 00:17:06.480 } 00:17:06.480 ]' 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.481 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.742 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:06.742 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.685 11:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.685 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.686 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.686 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.686 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.686 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.686 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.686 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.686 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.947 00:17:07.947 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.947 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.947 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.208 { 00:17:08.208 "cntlid": 57, 00:17:08.208 "qid": 0, 00:17:08.208 "state": "enabled", 00:17:08.208 "thread": "nvmf_tgt_poll_group_000", 00:17:08.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:08.208 "listen_address": { 00:17:08.208 "trtype": "TCP", 00:17:08.208 "adrfam": "IPv4", 00:17:08.208 "traddr": "10.0.0.2", 00:17:08.208 
"trsvcid": "4420" 00:17:08.208 }, 00:17:08.208 "peer_address": { 00:17:08.208 "trtype": "TCP", 00:17:08.208 "adrfam": "IPv4", 00:17:08.208 "traddr": "10.0.0.1", 00:17:08.208 "trsvcid": "36624" 00:17:08.208 }, 00:17:08.208 "auth": { 00:17:08.208 "state": "completed", 00:17:08.208 "digest": "sha384", 00:17:08.208 "dhgroup": "ffdhe2048" 00:17:08.208 } 00:17:08.208 } 00:17:08.208 ]' 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.208 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.469 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:08.469 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.412 11:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.412 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.673 00:17:09.673 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.673 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.673 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.934 { 00:17:09.934 "cntlid": 59, 00:17:09.934 "qid": 0, 00:17:09.934 "state": "enabled", 00:17:09.934 "thread": "nvmf_tgt_poll_group_000", 00:17:09.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:09.934 "listen_address": { 00:17:09.934 "trtype": "TCP", 00:17:09.934 "adrfam": "IPv4", 00:17:09.934 "traddr": "10.0.0.2", 00:17:09.934 "trsvcid": "4420" 00:17:09.934 }, 00:17:09.934 "peer_address": { 00:17:09.934 "trtype": "TCP", 00:17:09.934 "adrfam": "IPv4", 00:17:09.934 "traddr": "10.0.0.1", 00:17:09.934 "trsvcid": "36638" 00:17:09.934 }, 00:17:09.934 "auth": { 00:17:09.934 "state": "completed", 00:17:09.934 "digest": "sha384", 00:17:09.934 "dhgroup": "ffdhe2048" 00:17:09.934 } 00:17:09.934 } 00:17:09.934 ]' 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.934 11:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.934 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.934 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.934 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.934 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.196 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:10.196 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:10.769 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.031 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:11.031 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.031 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.031 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.031 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.031 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.031 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.031 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.292 00:17:11.292 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.292 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.292 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.552 11:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.552 { 00:17:11.552 "cntlid": 61, 00:17:11.552 "qid": 0, 00:17:11.552 "state": "enabled", 00:17:11.552 "thread": "nvmf_tgt_poll_group_000", 00:17:11.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:11.552 "listen_address": { 00:17:11.552 "trtype": "TCP", 00:17:11.552 "adrfam": "IPv4", 00:17:11.552 "traddr": "10.0.0.2", 00:17:11.552 "trsvcid": "4420" 00:17:11.552 }, 00:17:11.552 "peer_address": { 00:17:11.552 "trtype": "TCP", 00:17:11.552 "adrfam": "IPv4", 00:17:11.552 "traddr": "10.0.0.1", 00:17:11.552 "trsvcid": "36678" 00:17:11.552 }, 00:17:11.552 "auth": { 00:17:11.552 "state": "completed", 00:17:11.552 "digest": "sha384", 00:17:11.552 "dhgroup": "ffdhe2048" 00:17:11.552 } 00:17:11.552 } 00:17:11.552 ]' 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.552 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.812 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:11.812 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.751 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.012 00:17:13.012 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.012 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.012 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.273 { 00:17:13.273 "cntlid": 63, 00:17:13.273 "qid": 0, 00:17:13.273 "state": "enabled", 00:17:13.273 "thread": "nvmf_tgt_poll_group_000", 00:17:13.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:13.273 "listen_address": { 00:17:13.273 "trtype": "TCP", 00:17:13.273 "adrfam": 
"IPv4", 00:17:13.273 "traddr": "10.0.0.2", 00:17:13.273 "trsvcid": "4420" 00:17:13.273 }, 00:17:13.273 "peer_address": { 00:17:13.273 "trtype": "TCP", 00:17:13.273 "adrfam": "IPv4", 00:17:13.273 "traddr": "10.0.0.1", 00:17:13.273 "trsvcid": "36710" 00:17:13.273 }, 00:17:13.273 "auth": { 00:17:13.273 "state": "completed", 00:17:13.273 "digest": "sha384", 00:17:13.273 "dhgroup": "ffdhe2048" 00:17:13.273 } 00:17:13.273 } 00:17:13.273 ]' 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.273 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.534 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:13.534 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.476 
11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.476 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.737 00:17:14.737 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.737 11:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.737 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.997 { 00:17:14.997 "cntlid": 65, 00:17:14.997 "qid": 0, 00:17:14.997 "state": "enabled", 00:17:14.997 "thread": "nvmf_tgt_poll_group_000", 00:17:14.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:14.997 "listen_address": { 00:17:14.997 "trtype": "TCP", 00:17:14.997 "adrfam": "IPv4", 00:17:14.997 "traddr": "10.0.0.2", 00:17:14.997 "trsvcid": "4420" 00:17:14.997 }, 00:17:14.997 "peer_address": { 00:17:14.997 "trtype": "TCP", 00:17:14.997 "adrfam": "IPv4", 00:17:14.997 "traddr": "10.0.0.1", 00:17:14.997 "trsvcid": "36740" 00:17:14.997 }, 00:17:14.997 "auth": { 00:17:14.997 "state": "completed", 00:17:14.997 "digest": "sha384", 00:17:14.997 "dhgroup": "ffdhe3072" 00:17:14.997 } 00:17:14.997 } 00:17:14.997 ]' 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:14.997 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.997 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.997 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.997 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.997 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.997 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.259 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:15.259 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.203 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.464 00:17:16.464 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.464 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.464 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.725 11:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.725 { 00:17:16.725 "cntlid": 67, 00:17:16.725 "qid": 0, 00:17:16.725 "state": "enabled", 00:17:16.725 "thread": "nvmf_tgt_poll_group_000", 00:17:16.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:16.725 "listen_address": { 00:17:16.725 "trtype": "TCP", 00:17:16.725 "adrfam": "IPv4", 00:17:16.725 "traddr": "10.0.0.2", 00:17:16.725 "trsvcid": "4420" 00:17:16.725 }, 00:17:16.725 "peer_address": { 00:17:16.725 "trtype": "TCP", 00:17:16.725 "adrfam": "IPv4", 00:17:16.725 "traddr": "10.0.0.1", 00:17:16.725 "trsvcid": "35400" 00:17:16.725 }, 00:17:16.725 "auth": { 00:17:16.725 "state": "completed", 00:17:16.725 "digest": "sha384", 00:17:16.725 "dhgroup": "ffdhe3072" 00:17:16.725 } 00:17:16.725 } 00:17:16.725 ]' 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.725 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:16.986 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:17.930 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.931 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.192 00:17:18.192 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.192 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.192 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.453 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.453 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.453 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.453 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.453 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.453 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.453 { 00:17:18.453 "cntlid": 69, 00:17:18.453 "qid": 0, 00:17:18.453 "state": "enabled", 00:17:18.453 "thread": "nvmf_tgt_poll_group_000", 00:17:18.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:18.453 
"listen_address": { 00:17:18.453 "trtype": "TCP", 00:17:18.454 "adrfam": "IPv4", 00:17:18.454 "traddr": "10.0.0.2", 00:17:18.454 "trsvcid": "4420" 00:17:18.454 }, 00:17:18.454 "peer_address": { 00:17:18.454 "trtype": "TCP", 00:17:18.454 "adrfam": "IPv4", 00:17:18.454 "traddr": "10.0.0.1", 00:17:18.454 "trsvcid": "35422" 00:17:18.454 }, 00:17:18.454 "auth": { 00:17:18.454 "state": "completed", 00:17:18.454 "digest": "sha384", 00:17:18.454 "dhgroup": "ffdhe3072" 00:17:18.454 } 00:17:18.454 } 00:17:18.454 ]' 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.454 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.716 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:18.716 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.660 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.921 00:17:19.921 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.921 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:19.921 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.183 { 00:17:20.183 "cntlid": 71, 00:17:20.183 "qid": 0, 00:17:20.183 "state": "enabled", 00:17:20.183 "thread": "nvmf_tgt_poll_group_000", 00:17:20.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:20.183 "listen_address": { 00:17:20.183 "trtype": "TCP", 00:17:20.183 "adrfam": "IPv4", 00:17:20.183 "traddr": "10.0.0.2", 00:17:20.183 "trsvcid": "4420" 00:17:20.183 }, 00:17:20.183 "peer_address": { 00:17:20.183 "trtype": "TCP", 00:17:20.183 "adrfam": "IPv4", 00:17:20.183 "traddr": "10.0.0.1", 00:17:20.183 "trsvcid": "35448" 00:17:20.183 }, 00:17:20.183 "auth": { 00:17:20.183 "state": "completed", 00:17:20.183 "digest": "sha384", 00:17:20.183 "dhgroup": "ffdhe3072" 00:17:20.183 } 00:17:20.183 } 00:17:20.183 ]' 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.183 11:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.183 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.444 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:20.444 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:21.017 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.017 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.017 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:21.017 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.278 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.279 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.540 00:17:21.540 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.540 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.540 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.801 11:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.801 { 00:17:21.801 "cntlid": 73, 00:17:21.801 "qid": 0, 00:17:21.801 "state": "enabled", 00:17:21.801 "thread": "nvmf_tgt_poll_group_000", 00:17:21.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:21.801 "listen_address": { 00:17:21.801 "trtype": "TCP", 00:17:21.801 "adrfam": "IPv4", 00:17:21.801 "traddr": "10.0.0.2", 00:17:21.801 "trsvcid": "4420" 00:17:21.801 }, 00:17:21.801 "peer_address": { 00:17:21.801 "trtype": "TCP", 00:17:21.801 "adrfam": "IPv4", 00:17:21.801 "traddr": "10.0.0.1", 00:17:21.801 "trsvcid": "35480" 00:17:21.801 }, 00:17:21.801 "auth": { 00:17:21.801 "state": "completed", 00:17:21.801 "digest": "sha384", 00:17:21.801 "dhgroup": "ffdhe4096" 00:17:21.801 } 00:17:21.801 } 00:17:21.801 ]' 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.801 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.801 11:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.061 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:22.061 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.005 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.005 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.266 00:17:23.266 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.266 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.266 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.582 { 00:17:23.582 "cntlid": 75, 00:17:23.582 "qid": 0, 00:17:23.582 "state": "enabled", 00:17:23.582 "thread": "nvmf_tgt_poll_group_000", 00:17:23.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:23.582 
"listen_address": { 00:17:23.582 "trtype": "TCP", 00:17:23.582 "adrfam": "IPv4", 00:17:23.582 "traddr": "10.0.0.2", 00:17:23.582 "trsvcid": "4420" 00:17:23.582 }, 00:17:23.582 "peer_address": { 00:17:23.582 "trtype": "TCP", 00:17:23.582 "adrfam": "IPv4", 00:17:23.582 "traddr": "10.0.0.1", 00:17:23.582 "trsvcid": "35502" 00:17:23.582 }, 00:17:23.582 "auth": { 00:17:23.582 "state": "completed", 00:17:23.582 "digest": "sha384", 00:17:23.582 "dhgroup": "ffdhe4096" 00:17:23.582 } 00:17:23.582 } 00:17:23.582 ]' 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.582 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.930 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.931 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.931 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.931 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:23.931 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.582 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.842 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.101 00:17:25.101 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:25.101 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.101 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.361 { 00:17:25.361 "cntlid": 77, 00:17:25.361 "qid": 0, 00:17:25.361 "state": "enabled", 00:17:25.361 "thread": "nvmf_tgt_poll_group_000", 00:17:25.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:25.361 "listen_address": { 00:17:25.361 "trtype": "TCP", 00:17:25.361 "adrfam": "IPv4", 00:17:25.361 "traddr": "10.0.0.2", 00:17:25.361 "trsvcid": "4420" 00:17:25.361 }, 00:17:25.361 "peer_address": { 00:17:25.361 "trtype": "TCP", 00:17:25.361 "adrfam": "IPv4", 00:17:25.361 "traddr": "10.0.0.1", 00:17:25.361 "trsvcid": "35526" 00:17:25.361 }, 00:17:25.361 "auth": { 00:17:25.361 "state": "completed", 00:17:25.361 "digest": "sha384", 00:17:25.361 "dhgroup": "ffdhe4096" 00:17:25.361 } 00:17:25.361 } 00:17:25.361 ]' 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.361 11:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.361 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:25.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:26.621 11:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.621 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.881 00:17:26.881 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.881 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.881 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.140 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.140 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.140 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.140 11:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.140 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.140 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.140 { 00:17:27.140 "cntlid": 79, 00:17:27.140 "qid": 0, 00:17:27.140 "state": "enabled", 00:17:27.140 "thread": "nvmf_tgt_poll_group_000", 00:17:27.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:27.140 "listen_address": { 00:17:27.140 "trtype": "TCP", 00:17:27.140 "adrfam": "IPv4", 00:17:27.140 "traddr": "10.0.0.2", 00:17:27.140 "trsvcid": "4420" 00:17:27.140 }, 00:17:27.140 "peer_address": { 00:17:27.140 "trtype": "TCP", 00:17:27.141 "adrfam": "IPv4", 00:17:27.141 "traddr": "10.0.0.1", 00:17:27.141 "trsvcid": "46864" 00:17:27.141 }, 00:17:27.141 "auth": { 00:17:27.141 "state": "completed", 00:17:27.141 "digest": "sha384", 00:17:27.141 "dhgroup": "ffdhe4096" 00:17:27.141 } 00:17:27.141 } 00:17:27.141 ]' 00:17:27.141 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.141 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.141 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.141 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.141 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.141 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.141 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.141 11:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.400 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:27.401 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:28.359 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.360 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.621 00:17:28.621 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.621 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.621 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.882 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.882 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.882 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.882 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.882 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.882 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.882 { 00:17:28.882 "cntlid": 81, 00:17:28.882 "qid": 0, 00:17:28.882 "state": "enabled", 00:17:28.882 "thread": "nvmf_tgt_poll_group_000", 00:17:28.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:28.882 "listen_address": { 
00:17:28.882 "trtype": "TCP", 00:17:28.882 "adrfam": "IPv4", 00:17:28.882 "traddr": "10.0.0.2", 00:17:28.882 "trsvcid": "4420" 00:17:28.882 }, 00:17:28.882 "peer_address": { 00:17:28.882 "trtype": "TCP", 00:17:28.882 "adrfam": "IPv4", 00:17:28.882 "traddr": "10.0.0.1", 00:17:28.882 "trsvcid": "46892" 00:17:28.882 }, 00:17:28.882 "auth": { 00:17:28.882 "state": "completed", 00:17:28.882 "digest": "sha384", 00:17:28.882 "dhgroup": "ffdhe6144" 00:17:28.882 } 00:17:28.882 } 00:17:28.882 ]' 00:17:28.882 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.883 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.883 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.883 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.883 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.143 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.143 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.143 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.143 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:29.143 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.086 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.087 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.087 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.087 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.087 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.087 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.087 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.659 00:17:30.659 11:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.659 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.659 { 00:17:30.659 "cntlid": 83, 00:17:30.659 "qid": 0, 00:17:30.659 "state": "enabled", 00:17:30.659 "thread": "nvmf_tgt_poll_group_000", 00:17:30.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:30.659 "listen_address": { 00:17:30.659 "trtype": "TCP", 00:17:30.659 "adrfam": "IPv4", 00:17:30.659 "traddr": "10.0.0.2", 00:17:30.659 "trsvcid": "4420" 00:17:30.659 }, 00:17:30.659 "peer_address": { 00:17:30.659 "trtype": "TCP", 00:17:30.659 "adrfam": "IPv4", 00:17:30.659 "traddr": "10.0.0.1", 00:17:30.659 "trsvcid": "46914" 00:17:30.659 }, 00:17:30.659 "auth": { 00:17:30.659 "state": "completed", 00:17:30.659 "digest": "sha384", 00:17:30.659 "dhgroup": "ffdhe6144" 00:17:30.659 } 00:17:30.659 } 00:17:30.659 ]' 00:17:30.660 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:30.660 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.660 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.920 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.920 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.920 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.920 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.920 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.920 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:30.920 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:31.863 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.863 11:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.863 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.863 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.863 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.863 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.863 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.863 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.124 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.386 00:17:32.386 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.386 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.386 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.647 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.647 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.647 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.647 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.647 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.647 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.647 { 00:17:32.647 "cntlid": 85, 00:17:32.647 "qid": 0, 00:17:32.647 "state": "enabled", 00:17:32.647 "thread": "nvmf_tgt_poll_group_000", 00:17:32.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:32.648 "listen_address": { 00:17:32.648 "trtype": "TCP", 00:17:32.648 "adrfam": "IPv4", 00:17:32.648 "traddr": "10.0.0.2", 00:17:32.648 "trsvcid": "4420" 00:17:32.648 }, 00:17:32.648 "peer_address": { 00:17:32.648 "trtype": "TCP", 00:17:32.648 "adrfam": "IPv4", 00:17:32.648 "traddr": "10.0.0.1", 00:17:32.648 "trsvcid": "46932" 00:17:32.648 }, 00:17:32.648 "auth": { 00:17:32.648 "state": "completed", 00:17:32.648 "digest": "sha384", 00:17:32.648 "dhgroup": "ffdhe6144" 00:17:32.648 } 00:17:32.648 } 00:17:32.648 ]' 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.648 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.909 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:32.909 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:33.490 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.751 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.333 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.333 { 00:17:34.333 "cntlid": 87, 00:17:34.333 "qid": 0, 00:17:34.333 "state": "enabled", 00:17:34.333 "thread": "nvmf_tgt_poll_group_000", 00:17:34.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:34.333 "listen_address": { 00:17:34.333 "trtype": 
"TCP", 00:17:34.333 "adrfam": "IPv4", 00:17:34.333 "traddr": "10.0.0.2", 00:17:34.333 "trsvcid": "4420" 00:17:34.333 }, 00:17:34.333 "peer_address": { 00:17:34.333 "trtype": "TCP", 00:17:34.333 "adrfam": "IPv4", 00:17:34.333 "traddr": "10.0.0.1", 00:17:34.333 "trsvcid": "46966" 00:17:34.333 }, 00:17:34.333 "auth": { 00:17:34.333 "state": "completed", 00:17:34.333 "digest": "sha384", 00:17:34.333 "dhgroup": "ffdhe6144" 00:17:34.333 } 00:17:34.333 } 00:17:34.333 ]' 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.333 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.594 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.594 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.594 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.594 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.594 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.594 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:34.594 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.537 11:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.537 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.109 00:17:36.109 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.109 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.109 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.369 { 00:17:36.369 "cntlid": 89, 00:17:36.369 "qid": 0, 00:17:36.369 "state": "enabled", 00:17:36.369 "thread": "nvmf_tgt_poll_group_000", 00:17:36.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:36.369 "listen_address": { 00:17:36.369 "trtype": "TCP", 00:17:36.369 "adrfam": "IPv4", 00:17:36.369 "traddr": "10.0.0.2", 00:17:36.369 "trsvcid": "4420" 00:17:36.369 }, 00:17:36.369 "peer_address": { 00:17:36.369 "trtype": "TCP", 00:17:36.369 "adrfam": "IPv4", 00:17:36.369 "traddr": "10.0.0.1", 00:17:36.369 "trsvcid": "36890" 00:17:36.369 }, 00:17:36.369 "auth": { 00:17:36.369 "state": "completed", 00:17:36.369 "digest": "sha384", 00:17:36.369 "dhgroup": "ffdhe8192" 00:17:36.369 } 00:17:36.369 } 00:17:36.369 ]' 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.369 11:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.369 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.630 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:36.630 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.572 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.144 00:17:38.144 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.144 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.144 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.405 { 00:17:38.405 "cntlid": 91, 00:17:38.405 "qid": 0, 00:17:38.405 "state": "enabled", 00:17:38.405 "thread": "nvmf_tgt_poll_group_000", 00:17:38.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:38.405 "listen_address": { 00:17:38.405 "trtype": "TCP", 00:17:38.405 "adrfam": "IPv4", 00:17:38.405 "traddr": "10.0.0.2", 00:17:38.405 "trsvcid": "4420" 00:17:38.405 }, 00:17:38.405 "peer_address": { 00:17:38.405 "trtype": "TCP", 00:17:38.405 "adrfam": "IPv4", 00:17:38.405 "traddr": "10.0.0.1", 00:17:38.405 "trsvcid": "36928" 00:17:38.405 }, 00:17:38.405 "auth": { 00:17:38.405 "state": "completed", 00:17:38.405 "digest": "sha384", 00:17:38.405 "dhgroup": "ffdhe8192" 00:17:38.405 } 00:17:38.405 } 00:17:38.405 ]' 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.405 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:38.406 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.406 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.666 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:38.666 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.606 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.177 00:17:40.177 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.177 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.177 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.436 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.436 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.437 { 00:17:40.437 "cntlid": 93, 00:17:40.437 "qid": 0, 00:17:40.437 "state": "enabled", 00:17:40.437 "thread": "nvmf_tgt_poll_group_000", 00:17:40.437 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:40.437 "listen_address": { 00:17:40.437 "trtype": "TCP", 00:17:40.437 "adrfam": "IPv4", 00:17:40.437 "traddr": "10.0.0.2", 00:17:40.437 "trsvcid": "4420" 00:17:40.437 }, 00:17:40.437 "peer_address": { 00:17:40.437 "trtype": "TCP", 00:17:40.437 "adrfam": "IPv4", 00:17:40.437 "traddr": "10.0.0.1", 00:17:40.437 "trsvcid": "36958" 00:17:40.437 }, 00:17:40.437 "auth": { 00:17:40.437 "state": "completed", 00:17:40.437 "digest": "sha384", 00:17:40.437 "dhgroup": "ffdhe8192" 00:17:40.437 } 00:17:40.437 } 00:17:40.437 ]' 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.437 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.697 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:40.697 11:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.640 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.212 00:17:42.212 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:42.212 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.212 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.212 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.212 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.212 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.212 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.473 { 00:17:42.473 "cntlid": 95, 00:17:42.473 "qid": 0, 00:17:42.473 "state": "enabled", 00:17:42.473 "thread": "nvmf_tgt_poll_group_000", 00:17:42.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:42.473 "listen_address": { 00:17:42.473 "trtype": "TCP", 00:17:42.473 "adrfam": "IPv4", 00:17:42.473 "traddr": "10.0.0.2", 00:17:42.473 "trsvcid": "4420" 00:17:42.473 }, 00:17:42.473 "peer_address": { 00:17:42.473 "trtype": "TCP", 00:17:42.473 "adrfam": "IPv4", 00:17:42.473 "traddr": "10.0.0.1", 00:17:42.473 "trsvcid": "36980" 00:17:42.473 }, 00:17:42.473 "auth": { 00:17:42.473 "state": "completed", 00:17:42.473 "digest": "sha384", 00:17:42.473 "dhgroup": "ffdhe8192" 00:17:42.473 } 00:17:42.473 } 00:17:42.473 ]' 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.473 11:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.473 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.735 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:42.735 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.306 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.568 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.569 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.830 00:17:43.830 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.830 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.830 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.090 11:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.090 { 00:17:44.090 "cntlid": 97, 00:17:44.090 "qid": 0, 00:17:44.090 "state": "enabled", 00:17:44.090 "thread": "nvmf_tgt_poll_group_000", 00:17:44.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:44.090 "listen_address": { 00:17:44.090 "trtype": "TCP", 00:17:44.090 "adrfam": "IPv4", 00:17:44.090 "traddr": "10.0.0.2", 00:17:44.090 "trsvcid": "4420" 00:17:44.090 }, 00:17:44.090 "peer_address": { 00:17:44.090 "trtype": "TCP", 00:17:44.090 "adrfam": "IPv4", 00:17:44.090 "traddr": "10.0.0.1", 00:17:44.090 "trsvcid": "37014" 00:17:44.090 }, 00:17:44.090 "auth": { 00:17:44.090 "state": "completed", 00:17:44.090 "digest": "sha512", 00:17:44.090 "dhgroup": "null" 00:17:44.090 } 00:17:44.090 } 00:17:44.090 ]' 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.090 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.352 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:44.352 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:45.296 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.296 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.296 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.296 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.296 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.296 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.296 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.297 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.558 00:17:45.558 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.558 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.558 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.820 { 00:17:45.820 "cntlid": 99, 
00:17:45.820 "qid": 0, 00:17:45.820 "state": "enabled", 00:17:45.820 "thread": "nvmf_tgt_poll_group_000", 00:17:45.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:45.820 "listen_address": { 00:17:45.820 "trtype": "TCP", 00:17:45.820 "adrfam": "IPv4", 00:17:45.820 "traddr": "10.0.0.2", 00:17:45.820 "trsvcid": "4420" 00:17:45.820 }, 00:17:45.820 "peer_address": { 00:17:45.820 "trtype": "TCP", 00:17:45.820 "adrfam": "IPv4", 00:17:45.820 "traddr": "10.0.0.1", 00:17:45.820 "trsvcid": "37054" 00:17:45.820 }, 00:17:45.820 "auth": { 00:17:45.820 "state": "completed", 00:17:45.820 "digest": "sha512", 00:17:45.820 "dhgroup": "null" 00:17:45.820 } 00:17:45.820 } 00:17:45.820 ]' 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.820 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.081 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret 
DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:46.081 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.024 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.024 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.285 00:17:47.285 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.285 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.285 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.548 { 00:17:47.548 "cntlid": 101, 00:17:47.548 "qid": 0, 00:17:47.548 "state": "enabled", 00:17:47.548 "thread": "nvmf_tgt_poll_group_000", 00:17:47.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:47.548 "listen_address": { 00:17:47.548 "trtype": "TCP", 00:17:47.548 "adrfam": "IPv4", 00:17:47.548 "traddr": "10.0.0.2", 00:17:47.548 "trsvcid": "4420" 00:17:47.548 }, 00:17:47.548 "peer_address": { 00:17:47.548 "trtype": "TCP", 00:17:47.548 "adrfam": "IPv4", 00:17:47.548 "traddr": "10.0.0.1", 00:17:47.548 "trsvcid": "41344" 00:17:47.548 }, 00:17:47.548 "auth": { 00:17:47.548 "state": "completed", 00:17:47.548 "digest": "sha512", 00:17:47.548 "dhgroup": "null" 00:17:47.548 } 00:17:47.548 } 
00:17:47.548 ]' 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.548 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.808 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:47.808 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.378 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.378 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.639 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.898 00:17:48.898 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.898 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.898 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.158 { 00:17:49.158 "cntlid": 103, 00:17:49.158 "qid": 0, 00:17:49.158 "state": "enabled", 00:17:49.158 "thread": "nvmf_tgt_poll_group_000", 00:17:49.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:49.158 "listen_address": { 00:17:49.158 "trtype": "TCP", 00:17:49.158 "adrfam": "IPv4", 00:17:49.158 "traddr": "10.0.0.2", 00:17:49.158 "trsvcid": "4420" 00:17:49.158 }, 00:17:49.158 "peer_address": { 00:17:49.158 "trtype": "TCP", 00:17:49.158 "adrfam": "IPv4", 00:17:49.158 "traddr": "10.0.0.1", 00:17:49.158 "trsvcid": "41366" 00:17:49.158 }, 00:17:49.158 "auth": { 00:17:49.158 "state": "completed", 00:17:49.158 "digest": "sha512", 00:17:49.158 "dhgroup": "null" 00:17:49.158 } 00:17:49.158 } 00:17:49.158 ]' 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.158 11:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.158 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.417 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:49.417 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.368 11:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.368 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.628 00:17:50.628 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.628 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.628 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.889 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.889 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.889 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.889 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.889 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.889 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.889 { 00:17:50.889 "cntlid": 105, 00:17:50.889 "qid": 0, 00:17:50.889 "state": "enabled", 00:17:50.889 "thread": "nvmf_tgt_poll_group_000", 00:17:50.889 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:50.889 "listen_address": { 00:17:50.889 "trtype": "TCP", 00:17:50.889 "adrfam": "IPv4", 00:17:50.889 "traddr": "10.0.0.2", 00:17:50.889 "trsvcid": "4420" 00:17:50.889 }, 00:17:50.890 "peer_address": { 00:17:50.890 "trtype": "TCP", 00:17:50.890 "adrfam": "IPv4", 00:17:50.890 "traddr": "10.0.0.1", 00:17:50.890 "trsvcid": "41394" 00:17:50.890 }, 00:17:50.890 "auth": { 00:17:50.890 "state": "completed", 00:17:50.890 "digest": "sha512", 00:17:50.890 "dhgroup": "ffdhe2048" 00:17:50.890 } 00:17:50.890 } 00:17:50.890 ]' 00:17:50.890 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.890 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.890 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.890 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.890 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.890 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.890 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.890 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.150 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret 
DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:51.150 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.090 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.090 11:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.090 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.351 00:17:52.351 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.351 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.351 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.611 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.612 { 00:17:52.612 "cntlid": 107, 00:17:52.612 "qid": 0, 00:17:52.612 "state": "enabled", 00:17:52.612 "thread": "nvmf_tgt_poll_group_000", 00:17:52.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:52.612 "listen_address": { 00:17:52.612 "trtype": "TCP", 00:17:52.612 "adrfam": "IPv4", 00:17:52.612 "traddr": "10.0.0.2", 00:17:52.612 "trsvcid": "4420" 00:17:52.612 }, 00:17:52.612 "peer_address": { 00:17:52.612 "trtype": "TCP", 00:17:52.612 "adrfam": "IPv4", 00:17:52.612 "traddr": "10.0.0.1", 00:17:52.612 "trsvcid": "41418" 00:17:52.612 }, 00:17:52.612 "auth": { 00:17:52.612 "state": 
"completed", 00:17:52.612 "digest": "sha512", 00:17:52.612 "dhgroup": "ffdhe2048" 00:17:52.612 } 00:17:52.612 } 00:17:52.612 ]' 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.612 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.873 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:52.873 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:53.444 11:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.704 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.965 00:17:53.966 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.966 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.966 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.227 
11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.227 { 00:17:54.227 "cntlid": 109, 00:17:54.227 "qid": 0, 00:17:54.227 "state": "enabled", 00:17:54.227 "thread": "nvmf_tgt_poll_group_000", 00:17:54.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:54.227 "listen_address": { 00:17:54.227 "trtype": "TCP", 00:17:54.227 "adrfam": "IPv4", 00:17:54.227 "traddr": "10.0.0.2", 00:17:54.227 "trsvcid": "4420" 00:17:54.227 }, 00:17:54.227 "peer_address": { 00:17:54.227 "trtype": "TCP", 00:17:54.227 "adrfam": "IPv4", 00:17:54.227 "traddr": "10.0.0.1", 00:17:54.227 "trsvcid": "41428" 00:17:54.227 }, 00:17:54.227 "auth": { 00:17:54.227 "state": "completed", 00:17:54.227 "digest": "sha512", 00:17:54.227 "dhgroup": "ffdhe2048" 00:17:54.227 } 00:17:54.227 } 00:17:54.227 ]' 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.227 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.227 11:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.489 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.489 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.489 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.489 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:54.489 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:17:55.433 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.434 
11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.434 11:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.434 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.695 00:17:55.695 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.695 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.695 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.960 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.960 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.960 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.960 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.960 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.960 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.960 { 00:17:55.960 "cntlid": 111, 
00:17:55.960 "qid": 0, 00:17:55.960 "state": "enabled", 00:17:55.960 "thread": "nvmf_tgt_poll_group_000", 00:17:55.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:55.960 "listen_address": { 00:17:55.960 "trtype": "TCP", 00:17:55.960 "adrfam": "IPv4", 00:17:55.960 "traddr": "10.0.0.2", 00:17:55.960 "trsvcid": "4420" 00:17:55.960 }, 00:17:55.960 "peer_address": { 00:17:55.960 "trtype": "TCP", 00:17:55.960 "adrfam": "IPv4", 00:17:55.960 "traddr": "10.0.0.1", 00:17:55.960 "trsvcid": "46688" 00:17:55.960 }, 00:17:55.960 "auth": { 00:17:55.960 "state": "completed", 00:17:55.960 "digest": "sha512", 00:17:55.960 "dhgroup": "ffdhe2048" 00:17:55.960 } 00:17:55.960 } 00:17:55.960 ]' 00:17:55.960 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.960 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.960 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.960 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.960 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.220 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.220 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.221 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.221 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:56.221 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.162 11:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.162 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.424 00:17:57.424 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.424 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.424 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.685 { 00:17:57.685 "cntlid": 113, 00:17:57.685 "qid": 0, 00:17:57.685 "state": "enabled", 00:17:57.685 "thread": "nvmf_tgt_poll_group_000", 00:17:57.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.685 "listen_address": { 00:17:57.685 "trtype": "TCP", 00:17:57.685 "adrfam": "IPv4", 00:17:57.685 "traddr": "10.0.0.2", 00:17:57.685 "trsvcid": "4420" 00:17:57.685 }, 00:17:57.685 "peer_address": { 00:17:57.685 "trtype": "TCP", 00:17:57.685 "adrfam": "IPv4", 00:17:57.685 "traddr": "10.0.0.1", 00:17:57.685 "trsvcid": "46704" 00:17:57.685 }, 00:17:57.685 "auth": { 00:17:57.685 "state": 
"completed", 00:17:57.685 "digest": "sha512", 00:17:57.685 "dhgroup": "ffdhe3072" 00:17:57.685 } 00:17:57.685 } 00:17:57.685 ]' 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.685 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.945 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:57.945 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret 
DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.890 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.151 00:17:59.151 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.151 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.151 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.411 { 00:17:59.411 "cntlid": 115, 00:17:59.411 "qid": 0, 00:17:59.411 "state": "enabled", 00:17:59.411 "thread": "nvmf_tgt_poll_group_000", 00:17:59.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:59.411 "listen_address": { 00:17:59.411 "trtype": "TCP", 00:17:59.411 "adrfam": "IPv4", 00:17:59.411 "traddr": "10.0.0.2", 00:17:59.411 "trsvcid": "4420" 00:17:59.411 }, 00:17:59.411 "peer_address": { 00:17:59.411 "trtype": "TCP", 00:17:59.411 "adrfam": "IPv4", 00:17:59.411 "traddr": "10.0.0.1", 00:17:59.411 "trsvcid": "46732" 00:17:59.411 }, 00:17:59.411 "auth": { 00:17:59.411 "state": "completed", 00:17:59.411 "digest": "sha512", 00:17:59.411 "dhgroup": "ffdhe3072" 00:17:59.411 } 00:17:59.411 } 00:17:59.411 ]' 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.411 11:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.411 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.671 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:17:59.671 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.612 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.873 00:18:00.873 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.873 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.873 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.134 11:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.134 { 00:18:01.134 "cntlid": 117, 00:18:01.134 "qid": 0, 00:18:01.134 "state": "enabled", 00:18:01.134 "thread": "nvmf_tgt_poll_group_000", 00:18:01.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:01.134 "listen_address": { 00:18:01.134 "trtype": "TCP", 00:18:01.134 "adrfam": "IPv4", 00:18:01.134 "traddr": "10.0.0.2", 00:18:01.134 "trsvcid": "4420" 00:18:01.134 }, 00:18:01.134 "peer_address": { 00:18:01.134 "trtype": "TCP", 00:18:01.134 "adrfam": "IPv4", 00:18:01.134 "traddr": "10.0.0.1", 00:18:01.134 "trsvcid": "46760" 00:18:01.134 }, 00:18:01.134 "auth": { 00:18:01.134 "state": "completed", 00:18:01.134 "digest": "sha512", 00:18:01.134 "dhgroup": "ffdhe3072" 00:18:01.134 } 00:18:01.134 } 00:18:01.134 ]' 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.134 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.396 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:01.396 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.398 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:02.399 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.399 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.399 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.399 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.399 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.399 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.717 00:18:02.717 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.717 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.717 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.979 { 00:18:02.979 "cntlid": 119, 00:18:02.979 "qid": 0, 00:18:02.979 "state": "enabled", 00:18:02.979 "thread": "nvmf_tgt_poll_group_000", 00:18:02.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:02.979 "listen_address": { 00:18:02.979 "trtype": "TCP", 00:18:02.979 "adrfam": "IPv4", 00:18:02.979 "traddr": "10.0.0.2", 00:18:02.979 "trsvcid": "4420" 00:18:02.979 }, 00:18:02.979 "peer_address": { 00:18:02.979 "trtype": "TCP", 00:18:02.979 "adrfam": "IPv4", 00:18:02.979 "traddr": "10.0.0.1", 
00:18:02.979 "trsvcid": "46784" 00:18:02.979 }, 00:18:02.979 "auth": { 00:18:02.979 "state": "completed", 00:18:02.979 "digest": "sha512", 00:18:02.979 "dhgroup": "ffdhe3072" 00:18:02.979 } 00:18:02.979 } 00:18:02.979 ]' 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.979 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.979 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.979 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.979 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.240 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:03.240 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:03.812 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.812 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.812 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.812 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.812 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.813 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.813 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.813 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.813 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.073 11:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.073 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.334 00:18:04.334 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.334 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.334 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.593 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.593 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.593 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.593 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.593 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.594 { 00:18:04.594 "cntlid": 121, 00:18:04.594 "qid": 0, 00:18:04.594 "state": "enabled", 00:18:04.594 "thread": "nvmf_tgt_poll_group_000", 00:18:04.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:04.594 "listen_address": { 00:18:04.594 "trtype": "TCP", 00:18:04.594 "adrfam": "IPv4", 00:18:04.594 "traddr": "10.0.0.2", 00:18:04.594 "trsvcid": "4420" 00:18:04.594 }, 00:18:04.594 "peer_address": { 00:18:04.594 "trtype": "TCP", 00:18:04.594 "adrfam": "IPv4", 00:18:04.594 "traddr": "10.0.0.1", 00:18:04.594 "trsvcid": "46822" 00:18:04.594 }, 00:18:04.594 "auth": { 00:18:04.594 "state": "completed", 00:18:04.594 "digest": "sha512", 00:18:04.594 "dhgroup": "ffdhe4096" 00:18:04.594 } 00:18:04.594 } 00:18:04.594 ]' 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.594 11:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.594 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.854 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:04.854 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.799 11:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.799 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.800 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.800 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.800 11:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.800 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.800 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.800 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.800 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.061 00:18:06.061 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.061 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.061 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.322 { 00:18:06.322 "cntlid": 123, 00:18:06.322 "qid": 0, 00:18:06.322 "state": "enabled", 00:18:06.322 "thread": "nvmf_tgt_poll_group_000", 00:18:06.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:06.322 "listen_address": { 00:18:06.322 "trtype": "TCP", 00:18:06.322 "adrfam": "IPv4", 00:18:06.322 "traddr": "10.0.0.2", 00:18:06.322 "trsvcid": "4420" 00:18:06.322 }, 00:18:06.322 "peer_address": { 00:18:06.322 "trtype": "TCP", 00:18:06.322 "adrfam": "IPv4", 00:18:06.322 "traddr": "10.0.0.1", 00:18:06.322 "trsvcid": "35920" 00:18:06.322 }, 00:18:06.322 "auth": { 00:18:06.322 "state": "completed", 00:18:06.322 "digest": "sha512", 00:18:06.322 "dhgroup": "ffdhe4096" 00:18:06.322 } 00:18:06.322 } 00:18:06.322 ]' 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.322 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.584 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:18:06.584 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:18:07.155 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.416 11:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.416 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.675 00:18:07.675 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.675 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.675 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.935 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.935 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.935 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.935 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.935 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.935 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.935 { 00:18:07.935 "cntlid": 125, 00:18:07.935 "qid": 0, 00:18:07.935 "state": "enabled", 00:18:07.935 "thread": "nvmf_tgt_poll_group_000", 00:18:07.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:07.935 "listen_address": { 00:18:07.935 "trtype": "TCP", 00:18:07.935 "adrfam": "IPv4", 00:18:07.935 "traddr": "10.0.0.2", 00:18:07.935 
"trsvcid": "4420" 00:18:07.935 }, 00:18:07.935 "peer_address": { 00:18:07.935 "trtype": "TCP", 00:18:07.935 "adrfam": "IPv4", 00:18:07.935 "traddr": "10.0.0.1", 00:18:07.935 "trsvcid": "35946" 00:18:07.935 }, 00:18:07.935 "auth": { 00:18:07.935 "state": "completed", 00:18:07.935 "digest": "sha512", 00:18:07.935 "dhgroup": "ffdhe4096" 00:18:07.935 } 00:18:07.935 } 00:18:07.935 ]' 00:18:07.935 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.935 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.935 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.935 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.935 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.194 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.194 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.194 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.194 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:08.194 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.133 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.133 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.392 00:18:09.392 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.392 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:09.392 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.652 { 00:18:09.652 "cntlid": 127, 00:18:09.652 "qid": 0, 00:18:09.652 "state": "enabled", 00:18:09.652 "thread": "nvmf_tgt_poll_group_000", 00:18:09.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:09.652 "listen_address": { 00:18:09.652 "trtype": "TCP", 00:18:09.652 "adrfam": "IPv4", 00:18:09.652 "traddr": "10.0.0.2", 00:18:09.652 "trsvcid": "4420" 00:18:09.652 }, 00:18:09.652 "peer_address": { 00:18:09.652 "trtype": "TCP", 00:18:09.652 "adrfam": "IPv4", 00:18:09.652 "traddr": "10.0.0.1", 00:18:09.652 "trsvcid": "35980" 00:18:09.652 }, 00:18:09.652 "auth": { 00:18:09.652 "state": "completed", 00:18:09.652 "digest": "sha512", 00:18:09.652 "dhgroup": "ffdhe4096" 00:18:09.652 } 00:18:09.652 } 00:18:09.652 ]' 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.652 
11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.652 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.911 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:09.911 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:10.479 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.479 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.479 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.479 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:10.480 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.480 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.480 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.480 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.480 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.740 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.000 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.259 11:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.259 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.259 { 00:18:11.259 "cntlid": 129, 00:18:11.259 "qid": 0, 00:18:11.259 "state": "enabled", 00:18:11.259 "thread": "nvmf_tgt_poll_group_000", 00:18:11.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:11.259 "listen_address": { 00:18:11.259 "trtype": "TCP", 00:18:11.259 "adrfam": "IPv4", 00:18:11.259 "traddr": "10.0.0.2", 00:18:11.259 "trsvcid": "4420" 00:18:11.259 }, 00:18:11.259 "peer_address": { 00:18:11.259 "trtype": "TCP", 00:18:11.259 "adrfam": "IPv4", 00:18:11.259 "traddr": "10.0.0.1", 00:18:11.259 "trsvcid": "36002" 00:18:11.259 }, 00:18:11.259 "auth": { 00:18:11.259 "state": "completed", 00:18:11.259 "digest": "sha512", 00:18:11.259 "dhgroup": "ffdhe6144" 00:18:11.259 } 00:18:11.259 } 00:18:11.259 ]' 00:18:11.260 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.519 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.519 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.519 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.519 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.519 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.519 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.519 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.778 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:11.778 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:12.348 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.348 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.348 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.348 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.348 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.348 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.348 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.348 11:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.608 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.609 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.609 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.609 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.609 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.609 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.869 00:18:12.869 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.869 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.869 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.129 { 00:18:13.129 "cntlid": 131, 00:18:13.129 "qid": 0, 00:18:13.129 "state": "enabled", 00:18:13.129 "thread": "nvmf_tgt_poll_group_000", 00:18:13.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:13.129 "listen_address": { 00:18:13.129 "trtype": "TCP", 00:18:13.129 "adrfam": "IPv4", 00:18:13.129 "traddr": "10.0.0.2", 00:18:13.129 
"trsvcid": "4420" 00:18:13.129 }, 00:18:13.129 "peer_address": { 00:18:13.129 "trtype": "TCP", 00:18:13.129 "adrfam": "IPv4", 00:18:13.129 "traddr": "10.0.0.1", 00:18:13.129 "trsvcid": "36028" 00:18:13.129 }, 00:18:13.129 "auth": { 00:18:13.129 "state": "completed", 00:18:13.129 "digest": "sha512", 00:18:13.129 "dhgroup": "ffdhe6144" 00:18:13.129 } 00:18:13.129 } 00:18:13.129 ]' 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.129 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.390 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.390 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.391 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.391 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:18:13.391 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:18:14.331 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.331 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.331 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.331 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.332 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.901 00:18:14.901 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.901 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:14.901 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.901 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.901 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.901 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.901 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.901 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.901 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.901 { 00:18:14.901 "cntlid": 133, 00:18:14.901 "qid": 0, 00:18:14.901 "state": "enabled", 00:18:14.901 "thread": "nvmf_tgt_poll_group_000", 00:18:14.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:14.901 "listen_address": { 00:18:14.901 "trtype": "TCP", 00:18:14.901 "adrfam": "IPv4", 00:18:14.901 "traddr": "10.0.0.2", 00:18:14.901 "trsvcid": "4420" 00:18:14.901 }, 00:18:14.901 "peer_address": { 00:18:14.901 "trtype": "TCP", 00:18:14.901 "adrfam": "IPv4", 00:18:14.901 "traddr": "10.0.0.1", 00:18:14.901 "trsvcid": "36060" 00:18:14.901 }, 00:18:14.901 "auth": { 00:18:14.901 "state": "completed", 00:18:14.901 "digest": "sha512", 00:18:14.901 "dhgroup": "ffdhe6144" 00:18:14.901 } 00:18:14.901 } 00:18:14.901 ]' 00:18:14.901 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.161 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.161 11:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.161 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.161 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.161 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.161 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.161 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.422 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:15.422 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.020 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.281 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.541 00:18:16.541 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.541 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.541 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.802 { 00:18:16.802 "cntlid": 135, 00:18:16.802 "qid": 0, 00:18:16.802 "state": "enabled", 00:18:16.802 "thread": "nvmf_tgt_poll_group_000", 00:18:16.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:16.802 "listen_address": { 00:18:16.802 "trtype": "TCP", 00:18:16.802 "adrfam": "IPv4", 00:18:16.802 "traddr": "10.0.0.2", 00:18:16.802 "trsvcid": "4420" 00:18:16.802 }, 00:18:16.802 "peer_address": { 00:18:16.802 "trtype": "TCP", 00:18:16.802 "adrfam": "IPv4", 00:18:16.802 "traddr": "10.0.0.1", 00:18:16.802 "trsvcid": "47658" 00:18:16.802 }, 00:18:16.802 "auth": { 00:18:16.802 "state": "completed", 00:18:16.802 "digest": "sha512", 00:18:16.802 "dhgroup": "ffdhe6144" 00:18:16.802 } 00:18:16.802 } 00:18:16.802 ]' 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.802 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.063 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.063 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.063 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.063 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.063 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:17.063 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.004 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.004 11:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.004 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.575 00:18:18.575 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.575 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.575 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.836 { 00:18:18.836 "cntlid": 137, 00:18:18.836 "qid": 0, 00:18:18.836 "state": "enabled", 00:18:18.836 "thread": "nvmf_tgt_poll_group_000", 00:18:18.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:18.836 "listen_address": { 00:18:18.836 "trtype": "TCP", 00:18:18.836 "adrfam": "IPv4", 00:18:18.836 "traddr": "10.0.0.2", 00:18:18.836 
"trsvcid": "4420" 00:18:18.836 }, 00:18:18.836 "peer_address": { 00:18:18.836 "trtype": "TCP", 00:18:18.836 "adrfam": "IPv4", 00:18:18.836 "traddr": "10.0.0.1", 00:18:18.836 "trsvcid": "47682" 00:18:18.836 }, 00:18:18.836 "auth": { 00:18:18.836 "state": "completed", 00:18:18.836 "digest": "sha512", 00:18:18.836 "dhgroup": "ffdhe8192" 00:18:18.836 } 00:18:18.836 } 00:18:18.836 ]' 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.836 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.098 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:19.098 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.040 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.040 11:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.040 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.612 00:18:20.612 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.612 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.612 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.873 { 00:18:20.873 "cntlid": 139, 00:18:20.873 "qid": 0, 00:18:20.873 "state": "enabled", 00:18:20.873 "thread": "nvmf_tgt_poll_group_000", 00:18:20.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:20.873 "listen_address": { 00:18:20.873 "trtype": "TCP", 00:18:20.873 "adrfam": "IPv4", 00:18:20.873 "traddr": "10.0.0.2", 00:18:20.873 "trsvcid": "4420" 00:18:20.873 }, 00:18:20.873 "peer_address": { 00:18:20.873 "trtype": "TCP", 00:18:20.873 "adrfam": "IPv4", 00:18:20.873 "traddr": "10.0.0.1", 00:18:20.873 "trsvcid": "47714" 00:18:20.873 }, 00:18:20.873 "auth": { 00:18:20.873 "state": "completed", 00:18:20.873 "digest": "sha512", 00:18:20.873 "dhgroup": "ffdhe8192" 00:18:20.873 } 00:18:20.873 } 00:18:20.873 ]' 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.873 11:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.873 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.873 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.873 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.134 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:18:21.134 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: --dhchap-ctrl-secret DHHC-1:02:NDRmNTYzZWE0NDdiNDU0MTdiOTNmODhhYzM0NDQ2NTkwNzE2YzM4MjVlYjRkOGJjRm2wTg==: 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.077 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.077 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.647 00:18:22.647 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.647 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.647 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.906 11:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.906 { 00:18:22.906 "cntlid": 141, 00:18:22.906 "qid": 0, 00:18:22.906 "state": "enabled", 00:18:22.906 "thread": "nvmf_tgt_poll_group_000", 00:18:22.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:22.906 "listen_address": { 00:18:22.906 "trtype": "TCP", 00:18:22.906 "adrfam": "IPv4", 00:18:22.906 "traddr": "10.0.0.2", 00:18:22.906 "trsvcid": "4420" 00:18:22.906 }, 00:18:22.906 "peer_address": { 00:18:22.906 "trtype": "TCP", 00:18:22.906 "adrfam": "IPv4", 00:18:22.906 "traddr": "10.0.0.1", 00:18:22.906 "trsvcid": "47746" 00:18:22.906 }, 00:18:22.906 "auth": { 00:18:22.906 "state": "completed", 00:18:22.906 "digest": "sha512", 00:18:22.906 "dhgroup": "ffdhe8192" 00:18:22.906 } 00:18:22.906 } 00:18:22.906 ]' 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.906 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.906 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.906 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.906 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.167 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:23.167 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:01:MWZlNTdlMGViOGEyNTgzYzYyZjNiM2JhMTc3NzdlZTTS5LEV: 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.112 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.112 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.684 00:18:24.684 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.684 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.684 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.944 { 00:18:24.944 "cntlid": 143, 00:18:24.944 "qid": 0, 00:18:24.944 "state": "enabled", 00:18:24.944 "thread": "nvmf_tgt_poll_group_000", 00:18:24.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:24.944 "listen_address": { 00:18:24.944 "trtype": "TCP", 00:18:24.944 "adrfam": 
"IPv4", 00:18:24.944 "traddr": "10.0.0.2", 00:18:24.944 "trsvcid": "4420" 00:18:24.944 }, 00:18:24.944 "peer_address": { 00:18:24.944 "trtype": "TCP", 00:18:24.944 "adrfam": "IPv4", 00:18:24.944 "traddr": "10.0.0.1", 00:18:24.944 "trsvcid": "47776" 00:18:24.944 }, 00:18:24.944 "auth": { 00:18:24.944 "state": "completed", 00:18:24.944 "digest": "sha512", 00:18:24.944 "dhgroup": "ffdhe8192" 00:18:24.944 } 00:18:24.944 } 00:18:24.944 ]' 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.944 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.944 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.944 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.944 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.944 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.944 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.204 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:25.204 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:26.149 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.149 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.149 11:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.149 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.723 00:18:26.723 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.723 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.723 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.983 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.983 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.984 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.984 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.984 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.984 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.984 { 00:18:26.984 "cntlid": 145, 00:18:26.984 "qid": 0, 00:18:26.984 "state": "enabled", 00:18:26.984 "thread": "nvmf_tgt_poll_group_000", 00:18:26.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:26.984 "listen_address": { 00:18:26.984 "trtype": "TCP", 00:18:26.984 "adrfam": "IPv4", 00:18:26.984 "traddr": "10.0.0.2", 00:18:26.984 "trsvcid": "4420" 00:18:26.984 }, 00:18:26.984 "peer_address": { 00:18:26.984 "trtype": "TCP", 00:18:26.984 "adrfam": "IPv4", 00:18:26.984 "traddr": "10.0.0.1", 00:18:26.984 "trsvcid": "35760" 00:18:26.984 }, 00:18:26.984 "auth": { 00:18:26.984 "state": 
"completed", 00:18:26.984 "digest": "sha512", 00:18:26.984 "dhgroup": "ffdhe8192" 00:18:26.984 } 00:18:26.984 } 00:18:26.984 ]' 00:18:26.984 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.984 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.984 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.984 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.984 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.984 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.984 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.984 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.244 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:27.244 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YWRmMWRjM2IxZTJmYjBmNWZjZjI1ZWMwODEwODdiMWY1ODU3MDg3MTNmODdjNTAyDguSPg==: --dhchap-ctrl-secret 
DHHC-1:03:NmJjMDk5NDhjY2JmODgyNTFiZmYwYjY2NDJmMzBiZGMyNDFjZGQ1MjY1YWE2MDM2Y2JkNDEwMzIzN2MxNGJkMXceY44=: 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:27.815 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:28.387 request: 00:18:28.387 { 00:18:28.387 "name": "nvme0", 00:18:28.387 "trtype": "tcp", 00:18:28.387 "traddr": "10.0.0.2", 00:18:28.387 "adrfam": "ipv4", 00:18:28.387 "trsvcid": "4420", 00:18:28.387 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:28.387 "prchk_reftag": false, 00:18:28.387 "prchk_guard": false, 00:18:28.387 "hdgst": false, 00:18:28.387 "ddgst": false, 00:18:28.387 "dhchap_key": "key2", 00:18:28.387 "allow_unrecognized_csi": false, 00:18:28.387 "method": "bdev_nvme_attach_controller", 00:18:28.387 "req_id": 1 00:18:28.387 } 00:18:28.387 Got JSON-RPC error response 00:18:28.387 response: 00:18:28.387 { 00:18:28.387 "code": -5, 00:18:28.387 "message": 
"Input/output error" 00:18:28.387 } 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.387 11:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:28.387 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.388 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.388 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.388 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.960 request: 00:18:28.960 { 00:18:28.960 "name": "nvme0", 00:18:28.960 "trtype": "tcp", 00:18:28.960 "traddr": "10.0.0.2", 00:18:28.960 "adrfam": "ipv4", 00:18:28.960 "trsvcid": "4420", 00:18:28.960 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:28.960 "prchk_reftag": false, 00:18:28.960 "prchk_guard": false, 00:18:28.960 "hdgst": 
false, 00:18:28.960 "ddgst": false, 00:18:28.960 "dhchap_key": "key1", 00:18:28.960 "dhchap_ctrlr_key": "ckey2", 00:18:28.960 "allow_unrecognized_csi": false, 00:18:28.960 "method": "bdev_nvme_attach_controller", 00:18:28.960 "req_id": 1 00:18:28.960 } 00:18:28.960 Got JSON-RPC error response 00:18:28.960 response: 00:18:28.960 { 00:18:28.960 "code": -5, 00:18:28.960 "message": "Input/output error" 00:18:28.960 } 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.960 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.539 request: 00:18:29.539 { 00:18:29.539 "name": "nvme0", 00:18:29.539 "trtype": 
"tcp", 00:18:29.539 "traddr": "10.0.0.2", 00:18:29.539 "adrfam": "ipv4", 00:18:29.539 "trsvcid": "4420", 00:18:29.539 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:29.539 "prchk_reftag": false, 00:18:29.539 "prchk_guard": false, 00:18:29.539 "hdgst": false, 00:18:29.539 "ddgst": false, 00:18:29.539 "dhchap_key": "key1", 00:18:29.539 "dhchap_ctrlr_key": "ckey1", 00:18:29.539 "allow_unrecognized_csi": false, 00:18:29.539 "method": "bdev_nvme_attach_controller", 00:18:29.539 "req_id": 1 00:18:29.539 } 00:18:29.539 Got JSON-RPC error response 00:18:29.539 response: 00:18:29.539 { 00:18:29.539 "code": -5, 00:18:29.539 "message": "Input/output error" 00:18:29.539 } 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.539 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3395548 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3395548 ']' 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3395548 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3395548 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3395548' 00:18:29.540 killing process with pid 3395548 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3395548 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3395548 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3422635 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3422635 00:18:29.540 11:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3422635 ']' 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.540 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3422635 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3422635 ']' 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.477 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.737 null0 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RRY 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.oSa ]] 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oSa 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gtV 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Yz8 ]] 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yz8 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.737 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.SQB 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.nZv ]] 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nZv 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ara 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.738 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.676 nvme0n1 00:18:31.676 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.676 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.676 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.936 { 00:18:31.936 "cntlid": 1, 00:18:31.936 "qid": 0, 00:18:31.936 "state": "enabled", 00:18:31.936 "thread": "nvmf_tgt_poll_group_000", 00:18:31.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:31.936 "listen_address": { 00:18:31.936 "trtype": "TCP", 00:18:31.936 "adrfam": "IPv4", 00:18:31.936 "traddr": "10.0.0.2", 00:18:31.936 "trsvcid": "4420" 00:18:31.936 }, 00:18:31.936 "peer_address": { 00:18:31.936 "trtype": "TCP", 00:18:31.936 "adrfam": "IPv4", 00:18:31.936 "traddr": 
"10.0.0.1", 00:18:31.936 "trsvcid": "35816" 00:18:31.936 }, 00:18:31.936 "auth": { 00:18:31.936 "state": "completed", 00:18:31.936 "digest": "sha512", 00:18:31.936 "dhgroup": "ffdhe8192" 00:18:31.936 } 00:18:31.936 } 00:18:31.936 ]' 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.936 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.936 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.936 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.936 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.936 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.936 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.197 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:32.197 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:33.141 11:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:33.141 11:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.141 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.402 request: 00:18:33.402 { 00:18:33.402 "name": "nvme0", 00:18:33.402 "trtype": "tcp", 00:18:33.402 "traddr": "10.0.0.2", 00:18:33.402 "adrfam": "ipv4", 00:18:33.402 "trsvcid": "4420", 00:18:33.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:33.402 "prchk_reftag": false, 00:18:33.402 "prchk_guard": false, 00:18:33.402 "hdgst": false, 00:18:33.402 "ddgst": false, 00:18:33.402 "dhchap_key": "key3", 00:18:33.402 
"allow_unrecognized_csi": false, 00:18:33.402 "method": "bdev_nvme_attach_controller", 00:18:33.402 "req_id": 1 00:18:33.402 } 00:18:33.402 Got JSON-RPC error response 00:18:33.402 response: 00:18:33.402 { 00:18:33.402 "code": -5, 00:18:33.402 "message": "Input/output error" 00:18:33.402 } 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.402 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:33.663 11:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.663 request: 00:18:33.663 { 00:18:33.663 "name": "nvme0", 00:18:33.663 "trtype": "tcp", 00:18:33.663 "traddr": "10.0.0.2", 00:18:33.663 "adrfam": "ipv4", 00:18:33.663 "trsvcid": "4420", 00:18:33.663 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:33.663 "prchk_reftag": false, 00:18:33.663 "prchk_guard": false, 00:18:33.663 "hdgst": false, 00:18:33.663 "ddgst": false, 00:18:33.663 "dhchap_key": "key3", 00:18:33.663 "allow_unrecognized_csi": false, 00:18:33.663 "method": "bdev_nvme_attach_controller", 00:18:33.663 "req_id": 1 00:18:33.663 } 00:18:33.663 Got JSON-RPC error response 00:18:33.663 response: 00:18:33.663 { 00:18:33.663 "code": -5, 00:18:33.663 "message": "Input/output error" 00:18:33.663 } 00:18:33.663 
11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.663 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.924 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.186 request: 00:18:34.186 { 00:18:34.186 "name": "nvme0", 00:18:34.186 "trtype": "tcp", 00:18:34.186 "traddr": "10.0.0.2", 00:18:34.186 "adrfam": "ipv4", 00:18:34.186 "trsvcid": "4420", 00:18:34.186 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:34.186 "prchk_reftag": false, 00:18:34.186 "prchk_guard": false, 00:18:34.186 "hdgst": false, 00:18:34.186 "ddgst": false, 00:18:34.186 "dhchap_key": "key0", 00:18:34.186 "dhchap_ctrlr_key": "key1", 00:18:34.186 "allow_unrecognized_csi": false, 00:18:34.186 "method": "bdev_nvme_attach_controller", 00:18:34.186 "req_id": 1 00:18:34.186 } 00:18:34.186 Got JSON-RPC error response 00:18:34.186 response: 00:18:34.186 { 00:18:34.186 "code": -5, 00:18:34.186 "message": "Input/output error" 00:18:34.186 } 00:18:34.186 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:34.186 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.186 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.186 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.186 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:34.186 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:34.186 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:34.447 nvme0n1 00:18:34.447 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:34.447 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:34.447 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.708 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.708 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.708 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.970 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:34.970 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.970 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:34.970 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.970 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:34.970 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:34.970 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:35.912 nvme0n1 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.912 
11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:35.912 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:35.913 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.174 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.174 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:36.175 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: --dhchap-ctrl-secret DHHC-1:03:MGIxMDY0MmUyMDcyN2NiYzBmYjg5NmYyN2I2N2U5MGNiYWQ1ZjM4NmRkMmJmNmYyYmIwMWJlYTdhMDJkZmU0M/mxtEM=: 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.748 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:37.010 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:37.584 request: 00:18:37.584 { 00:18:37.584 "name": "nvme0", 00:18:37.584 "trtype": "tcp", 00:18:37.584 "traddr": "10.0.0.2", 00:18:37.584 "adrfam": "ipv4", 00:18:37.584 "trsvcid": "4420", 00:18:37.584 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:37.584 "prchk_reftag": false, 00:18:37.584 "prchk_guard": false, 00:18:37.584 "hdgst": false, 00:18:37.584 "ddgst": false, 00:18:37.584 "dhchap_key": "key1", 00:18:37.584 "allow_unrecognized_csi": false, 00:18:37.584 "method": "bdev_nvme_attach_controller", 00:18:37.584 "req_id": 1 00:18:37.584 } 00:18:37.584 Got JSON-RPC error response 00:18:37.584 response: 00:18:37.584 { 00:18:37.584 "code": -5, 00:18:37.584 "message": "Input/output error" 00:18:37.584 } 00:18:37.584 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:37.584 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.584 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.584 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.584 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.584 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.584 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:38.525 nvme0n1 00:18:38.525 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:38.525 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:38.525 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.525 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.525 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.525 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.786 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.786 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.786 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.786 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.786 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:38.786 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:38.786 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:38.786 nvme0n1 00:18:39.048 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:39.048 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:39.048 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.048 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.048 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.048 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: '' 2s 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: ]] 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzBlNjg3YjhlMmUxM2Q1ODYyOWIzNjZjMDU3YzQyYTKs4klB: 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:39.309 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:41.224 
11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: 2s 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:41.224 11:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: ]] 00:18:41.224 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZjVmOWJkMjI2MTBjNjFlZDlhNWZjOWU2OTU1YzBiMjQ2MjlhODRhOWExYjkwZjA1nwKNPg==: 00:18:41.485 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:41.485 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:43.402 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:44.431 nvme0n1 00:18:44.431 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.431 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.431 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.431 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.431 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.431 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.714 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:44.714 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:44.714 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.973 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.973 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.973 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.973 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.973 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.973 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:44.974 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:45.234 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:45.234 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:45.234 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:45.494 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:45.753 request: 00:18:45.753 { 00:18:45.753 "name": "nvme0", 00:18:45.753 "dhchap_key": "key1", 00:18:45.753 "dhchap_ctrlr_key": "key3", 00:18:45.753 "method": "bdev_nvme_set_keys", 00:18:45.753 "req_id": 1 00:18:45.753 } 00:18:45.753 Got JSON-RPC error response 00:18:45.753 response: 00:18:45.753 { 00:18:45.753 "code": -13, 00:18:45.753 "message": "Permission denied" 00:18:45.753 } 00:18:46.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:46.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:46.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:46.011 11:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.011 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:46.011 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:46.952 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:46.952 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:46.952 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.212 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:48.153 nvme0n1 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.153 11:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:48.153 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:48.724 request: 00:18:48.724 { 00:18:48.724 "name": "nvme0", 00:18:48.724 "dhchap_key": "key2", 00:18:48.724 "dhchap_ctrlr_key": "key0", 00:18:48.724 "method": "bdev_nvme_set_keys", 00:18:48.724 "req_id": 1 00:18:48.724 } 00:18:48.724 Got JSON-RPC error response 00:18:48.724 response: 00:18:48.724 { 00:18:48.724 "code": -13, 00:18:48.724 "message": "Permission denied" 00:18:48.724 } 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.724 11:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:48.724 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:50.106 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:50.106 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:50.106 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3395588 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3395588 ']' 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3395588 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3395588 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 3395588' 00:18:50.106 killing process with pid 3395588 00:18:50.106 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3395588 00:18:50.107 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3395588 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.367 rmmod nvme_tcp 00:18:50.367 rmmod nvme_fabrics 00:18:50.367 rmmod nvme_keyring 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3422635 ']' 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3422635 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3422635 ']' 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3422635 
00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3422635 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3422635' 00:18:50.367 killing process with pid 3422635 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3422635 00:18:50.367 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3422635 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.628 11:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.628 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.545 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:52.545 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RRY /tmp/spdk.key-sha256.gtV /tmp/spdk.key-sha384.SQB /tmp/spdk.key-sha512.Ara /tmp/spdk.key-sha512.oSa /tmp/spdk.key-sha384.Yz8 /tmp/spdk.key-sha256.nZv '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:52.545 00:18:52.545 real 2m46.113s 00:18:52.545 user 6m9.068s 00:18:52.545 sys 0m25.495s 00:18:52.545 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.545 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.545 ************************************ 00:18:52.545 END TEST nvmf_auth_target 00:18:52.545 ************************************ 00:18:52.545 11:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:52.545 11:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:52.545 11:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:52.546 11:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:18:52.546 11:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.807 ************************************ 00:18:52.807 START TEST nvmf_bdevio_no_huge 00:18:52.807 ************************************ 00:18:52.807 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:52.807 * Looking for test storage... 00:18:52.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.807 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # 
local 'op=<' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:52.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.808 --rc genhtml_branch_coverage=1 00:18:52.808 --rc genhtml_function_coverage=1 00:18:52.808 --rc genhtml_legend=1 00:18:52.808 --rc geninfo_all_blocks=1 00:18:52.808 --rc geninfo_unexecuted_blocks=1 00:18:52.808 00:18:52.808 ' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:52.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.808 --rc genhtml_branch_coverage=1 00:18:52.808 --rc genhtml_function_coverage=1 00:18:52.808 --rc genhtml_legend=1 00:18:52.808 --rc geninfo_all_blocks=1 00:18:52.808 --rc geninfo_unexecuted_blocks=1 00:18:52.808 00:18:52.808 ' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:52.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.808 --rc genhtml_branch_coverage=1 00:18:52.808 --rc genhtml_function_coverage=1 00:18:52.808 --rc genhtml_legend=1 00:18:52.808 --rc geninfo_all_blocks=1 00:18:52.808 --rc geninfo_unexecuted_blocks=1 00:18:52.808 00:18:52.808 ' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:52.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.808 --rc genhtml_branch_coverage=1 
00:18:52.808 --rc genhtml_function_coverage=1 00:18:52.808 --rc genhtml_legend=1 00:18:52.808 --rc geninfo_all_blocks=1 00:18:52.808 --rc geninfo_unexecuted_blocks=1 00:18:52.808 00:18:52.808 ' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.808 11:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.808 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:52.809 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:19:00.964 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:00.964 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.964 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:00.965 Found net devices under 0000:31:00.0: cvl_0_0 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.965 
11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:00.965 Found net devices under 0000:31:00.1: cvl_0_1 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.965 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:01.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:19:01.227 00:19:01.227 --- 10.0.0.2 ping statistics --- 00:19:01.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.227 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:19:01.227 00:19:01.227 --- 10.0.0.1 ping statistics --- 00:19:01.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.227 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3431586 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3431586 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3431586 ']' 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.227 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:01.489 [2024-12-06 11:18:07.405723] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:19:01.489 [2024-12-06 11:18:07.405789] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:01.489 [2024-12-06 11:18:07.525443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.489 [2024-12-06 11:18:07.585994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.489 [2024-12-06 11:18:07.586042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.489 [2024-12-06 11:18:07.586051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.489 [2024-12-06 11:18:07.586058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.489 [2024-12-06 11:18:07.586063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.489 [2024-12-06 11:18:07.587513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:01.489 [2024-12-06 11:18:07.587646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:01.489 [2024-12-06 11:18:07.587820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:01.489 [2024-12-06 11:18:07.587986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.432 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.432 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:02.432 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.432 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.432 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 [2024-12-06 11:18:08.286693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.433 11:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 Malloc0 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.433 [2024-12-06 11:18:08.340757] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.433 11:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:02.433 { 00:19:02.433 "params": { 00:19:02.433 "name": "Nvme$subsystem", 00:19:02.433 "trtype": "$TEST_TRANSPORT", 00:19:02.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.433 "adrfam": "ipv4", 00:19:02.433 "trsvcid": "$NVMF_PORT", 00:19:02.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.433 "hdgst": ${hdgst:-false}, 00:19:02.433 "ddgst": ${ddgst:-false} 00:19:02.433 }, 00:19:02.433 "method": "bdev_nvme_attach_controller" 00:19:02.433 } 00:19:02.433 EOF 00:19:02.433 )") 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:02.433 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:02.433 "params": { 00:19:02.433 "name": "Nvme1", 00:19:02.433 "trtype": "tcp", 00:19:02.433 "traddr": "10.0.0.2", 00:19:02.433 "adrfam": "ipv4", 00:19:02.433 "trsvcid": "4420", 00:19:02.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.433 "hdgst": false, 00:19:02.433 "ddgst": false 00:19:02.433 }, 00:19:02.433 "method": "bdev_nvme_attach_controller" 00:19:02.433 }' 00:19:02.433 [2024-12-06 11:18:08.397395] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:19:02.433 [2024-12-06 11:18:08.397465] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3431799 ] 00:19:02.433 [2024-12-06 11:18:08.487623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.433 [2024-12-06 11:18:08.543108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.433 [2024-12-06 11:18:08.543227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.433 [2024-12-06 11:18:08.543230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.694 I/O targets: 00:19:02.694 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:02.694 00:19:02.694 00:19:02.694 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.694 http://cunit.sourceforge.net/ 00:19:02.694 00:19:02.694 00:19:02.694 Suite: bdevio tests on: Nvme1n1 00:19:02.694 Test: blockdev write read block ...passed 00:19:02.694 Test: blockdev write zeroes read block ...passed 00:19:02.694 Test: blockdev write zeroes read no split ...passed 00:19:02.694 Test: blockdev write zeroes 
read split ...passed 00:19:02.694 Test: blockdev write zeroes read split partial ...passed 00:19:02.694 Test: blockdev reset ...[2024-12-06 11:18:08.838523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:02.694 [2024-12-06 11:18:08.838585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141af70 (9): Bad file descriptor 00:19:02.956 [2024-12-06 11:18:08.899774] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:02.956 passed 00:19:02.956 Test: blockdev write read 8 blocks ...passed 00:19:02.956 Test: blockdev write read size > 128k ...passed 00:19:02.956 Test: blockdev write read invalid size ...passed 00:19:02.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:02.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:02.956 Test: blockdev write read max offset ...passed 00:19:02.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:02.956 Test: blockdev writev readv 8 blocks ...passed 00:19:02.956 Test: blockdev writev readv 30 x 1block ...passed 00:19:02.956 Test: blockdev writev readv block ...passed 00:19:02.956 Test: blockdev writev readv size > 128k ...passed 00:19:02.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:02.956 Test: blockdev comparev and writev ...[2024-12-06 11:18:09.083429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 11:18:09.083455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:02.956 [2024-12-06 11:18:09.083467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 
11:18:09.083473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:02.956 [2024-12-06 11:18:09.083971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 11:18:09.083981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:02.956 [2024-12-06 11:18:09.083991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 11:18:09.083997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:02.956 [2024-12-06 11:18:09.084459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 11:18:09.084467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:02.956 [2024-12-06 11:18:09.084476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 11:18:09.084482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:02.956 [2024-12-06 11:18:09.084976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 11:18:09.084984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:02.956 [2024-12-06 11:18:09.084994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:02.956 [2024-12-06 11:18:09.085000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:03.218 passed 00:19:03.218 Test: blockdev nvme passthru rw ...passed 00:19:03.218 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:18:09.169722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.218 [2024-12-06 11:18:09.169733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:03.218 [2024-12-06 11:18:09.170042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.218 [2024-12-06 11:18:09.170050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:03.218 [2024-12-06 11:18:09.170405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.218 [2024-12-06 11:18:09.170412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:03.218 [2024-12-06 11:18:09.170736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.218 [2024-12-06 11:18:09.170748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:03.218 passed 00:19:03.218 Test: blockdev nvme admin passthru ...passed 00:19:03.218 Test: blockdev copy ...passed 00:19:03.218 00:19:03.218 Run Summary: Type Total Ran Passed Failed Inactive 00:19:03.218 suites 1 1 n/a 0 0 00:19:03.218 tests 23 23 23 0 0 00:19:03.218 asserts 152 152 152 0 n/a 00:19:03.218 00:19:03.218 Elapsed time = 1.054 seconds 
00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:03.479 rmmod nvme_tcp 00:19:03.479 rmmod nvme_fabrics 00:19:03.479 rmmod nvme_keyring 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3431586 ']' 00:19:03.479 11:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3431586 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3431586 ']' 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3431586 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3431586 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3431586' 00:19:03.479 killing process with pid 3431586 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3431586 00:19:03.479 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3431586 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:03.741 11:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.741 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.289 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:06.289 00:19:06.289 real 0m13.238s 00:19:06.289 user 0m13.302s 00:19:06.289 sys 0m7.306s 00:19:06.289 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.289 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:06.289 ************************************ 00:19:06.289 END TEST nvmf_bdevio_no_huge 00:19:06.289 ************************************ 00:19:06.289 11:18:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.289 
************************************ 00:19:06.289 START TEST nvmf_tls 00:19:06.289 ************************************ 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:06.289 * Looking for test storage... 00:19:06.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.289 --rc genhtml_branch_coverage=1 00:19:06.289 --rc genhtml_function_coverage=1 00:19:06.289 --rc genhtml_legend=1 00:19:06.289 --rc geninfo_all_blocks=1 00:19:06.289 --rc geninfo_unexecuted_blocks=1 00:19:06.289 00:19:06.289 ' 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.289 --rc genhtml_branch_coverage=1 00:19:06.289 --rc genhtml_function_coverage=1 00:19:06.289 --rc genhtml_legend=1 00:19:06.289 --rc geninfo_all_blocks=1 00:19:06.289 --rc geninfo_unexecuted_blocks=1 00:19:06.289 00:19:06.289 ' 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.289 --rc genhtml_branch_coverage=1 00:19:06.289 --rc genhtml_function_coverage=1 00:19:06.289 --rc genhtml_legend=1 00:19:06.289 --rc geninfo_all_blocks=1 00:19:06.289 --rc geninfo_unexecuted_blocks=1 00:19:06.289 00:19:06.289 ' 00:19:06.289 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:06.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.289 --rc genhtml_branch_coverage=1 00:19:06.289 --rc genhtml_function_coverage=1 00:19:06.289 --rc genhtml_legend=1 00:19:06.289 --rc geninfo_all_blocks=1 00:19:06.289 --rc geninfo_unexecuted_blocks=1 00:19:06.289 00:19:06.289 ' 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.290 
11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:06.290 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.432 11:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:14.432 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:14.432 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.432 11:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:14.432 Found net devices under 0000:31:00.0: cvl_0_0 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:14.432 Found net devices under 0000:31:00.1: cvl_0_1 00:19:14.432 11:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:14.432 
11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:14.432 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:14.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:19:14.693 00:19:14.693 --- 10.0.0.2 ping statistics --- 00:19:14.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.693 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:19:14.693 00:19:14.693 --- 10.0.0.1 ping statistics --- 00:19:14.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.693 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3437208 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3437208 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3437208 ']' 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.693 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.693 [2024-12-06 11:18:20.838155] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:19:14.693 [2024-12-06 11:18:20.838224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.961 [2024-12-06 11:18:20.951674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.961 [2024-12-06 11:18:21.002655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.961 [2024-12-06 11:18:21.002706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
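The nvmf_tcp_init sequence traced earlier (flush both interfaces, create the `cvl_0_0_ns_spdk` namespace, move the target-side interface into it, address both ends on 10.0.0.0/24, open TCP/4420 in iptables, and ping-verify both directions) can be condensed into a standalone sketch. The interface names and addresses are the ones from this run; since the real commands need root and two test NICs, the sketch only prints each privileged command instead of executing it.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init topology from the trace above.
# run() echoes each privileged command; swap its body for "$@" (and run
# as root) to apply the topology for real.
set -euo pipefail

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2 INITIATOR_IP=10.0.0.1

run() { printf '+ %s\n' "$*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"            # target side lives in the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                                 # host -> namespace
run ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"   # namespace -> host
```

Moving one end of the link into a namespace is what lets target and initiator share a single machine while still exercising a real TCP path, which is why every target-side command in the trace is wrapped in `ip netns exec cvl_0_0_ns_spdk`.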
00:19:14.961 [2024-12-06 11:18:21.002715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.961 [2024-12-06 11:18:21.002723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.961 [2024-12-06 11:18:21.002730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.961 [2024-12-06 11:18:21.003507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.533 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.533 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.533 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.533 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.533 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.533 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.793 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:15.793 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:15.793 true 00:19:15.793 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:15.793 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:16.053 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:16.053 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:16.053 
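The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper. A simplified sketch of that polling loop follows; the real helper in autotest_common.sh additionally probes the socket with an RPC call, while this version only checks that the process is alive and the socket file exists.

```shell
# Simplified waitforlisten: poll until the target process is up and its
# RPC Unix socket appears, or bail out if the process dies first.
waitforlisten() {
	local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i max_retries=100
	echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
	for ((i = 0; i < max_retries; i++)); do
		kill -0 "$pid" 2>/dev/null || return 1   # process exited early
		if [[ -S "$rpc_sock" ]]; then
			return 0                         # socket is up
		fi
		sleep 0.1
	done
	return 1                                         # timed out
}
```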
11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:16.314 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:16.314 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:16.314 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:16.314 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:16.314 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:16.575 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:16.575 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:16.836 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:16.836 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:16.836 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:16.836 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:17.096 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:17.096 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:17.096 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
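The tls-version steps above all follow the same set-then-verify pattern: push an option with `sock_impl_set_options`, read it back with `sock_impl_get_options | jq -r .tls_version`, and compare in bash. The comparisons appear in the trace with every character of the right-hand side escaped (`[[ 13 != \1\3 ]]`) because an unquoted right-hand side of `[[ != ]]` is treated as a glob pattern; escaping forces a literal match. The check boils down to a helper like this (a sketch, not the literal tls.sh code):

```shell
# Fail loudly when a value read back over RPC does not match what was set.
expect_eq() {
	local what=$1 got=$2 want=$3
	if [[ "$got" != "$want" ]]; then
		echo "$what: got '$got', expected '$want'" >&2
		return 1
	fi
}
```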
00:19:17.096 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.096 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:17.356 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:17.356 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:17.356 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:17.617 11:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:17.617 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.DRwEjVlhru 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gVuBHpWWEW 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.DRwEjVlhru 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
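The format_interchange_psk steps above build the NVMe/TCP TLS PSK interchange form `NVMeTLSkey-1:<hash>:<base64>:` (hash `01` for the configured digest 1): the hex string is treated as opaque ASCII key material, a CRC32 is appended, and the result is base64-encoded, which is why the inline `python -` step appears in the trace. The sketch below mirrors that flow; the little-endian CRC byte order is an assumption based on nvmf/common.sh, not something the trace itself shows.

```shell
# Sketch of format_interchange_psk: wrap raw key material into the
# NVMe/TCP TLS PSK interchange format using an inline python step, as
# the traced helper does.
format_interchange_psk() {
	local key=$1 digest=$2
	python3 - "$key" "$digest" <<'EOF'
import base64
import sys
import zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity bytes appended to the key
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
}
```

Because the base64 payload is just key-plus-CRC, decoding the middle field of either key in the trace yields the original hex string back, followed by four CRC bytes.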
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gVuBHpWWEW 00:19:17.877 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:17.877 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:18.138 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.DRwEjVlhru 00:19:18.138 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DRwEjVlhru 00:19:18.138 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.399 [2024-12-06 11:18:24.370541] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.399 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.399 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.659 [2024-12-06 11:18:24.695321] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.659 [2024-12-06 11:18:24.695517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.659 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:18.920 malloc0 00:19:18.920 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:18.920 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DRwEjVlhru 00:19:19.180 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.440 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.DRwEjVlhru 00:19:29.440 Initializing NVMe Controllers 00:19:29.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:29.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:29.440 Initialization complete. Launching workers. 
00:19:29.440 ========================================================
00:19:29.440 Latency(us)
00:19:29.440 Device Information : IOPS MiB/s Average min max
00:19:29.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18642.28 72.82 3433.04 1135.37 4172.90
00:19:29.440 ========================================================
00:19:29.440 Total : 18642.28 72.82 3433.04 1135.37 4172.90
00:19:29.440
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRwEjVlhru
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DRwEjVlhru
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3440150
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3440150 /var/tmp/bdevperf.sock
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3440150 ']'
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.440 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.440 [2024-12-06 11:18:35.523057] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:19:29.440 [2024-12-06 11:18:35.523125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440150 ] 00:19:29.440 [2024-12-06 11:18:35.588133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.701 [2024-12-06 11:18:35.617228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.701 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.701 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.701 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DRwEjVlhru 00:19:29.962 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
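Once bdevperf's RPC socket is up, the run_bdevperf flow traced above registers the PSK file with the keyring, attaches a TLS-enabled controller against the namespaced target, and drives the verify workload through bdevperf.py. A condensed dry-run sketch (paths, key names, and NQNs taken from this run; `rpc` only echoes, since there is no live bdevperf process here):

```shell
# Dry-run of the RPC sequence issued against bdevperf's own socket.
rpc() { echo rpc.py -s /var/tmp/bdevperf.sock "$@"; }

rpc keyring_file_add_key key0 /tmp/tmp.DRwEjVlhru
rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
	-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
echo bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
```

The key is referenced by its keyring name (`key0`) rather than by path in the attach call, which is why the keyring_file_add_key step has to come first.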
--psk key0 00:19:29.962 [2024-12-06 11:18:36.031794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.962 TLSTESTn1 00:19:29.962 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.224 Running I/O for 10 seconds... 00:19:32.109 5867.00 IOPS, 22.92 MiB/s [2024-12-06T10:18:39.219Z] 5972.00 IOPS, 23.33 MiB/s [2024-12-06T10:18:40.604Z] 5990.67 IOPS, 23.40 MiB/s [2024-12-06T10:18:41.546Z] 6068.00 IOPS, 23.70 MiB/s [2024-12-06T10:18:42.487Z] 6058.00 IOPS, 23.66 MiB/s [2024-12-06T10:18:43.430Z] 5840.67 IOPS, 22.82 MiB/s [2024-12-06T10:18:44.375Z] 5889.00 IOPS, 23.00 MiB/s [2024-12-06T10:18:45.319Z] 5901.38 IOPS, 23.05 MiB/s [2024-12-06T10:18:46.261Z] 5912.44 IOPS, 23.10 MiB/s [2024-12-06T10:18:46.261Z] 5893.30 IOPS, 23.02 MiB/s 00:19:40.094 Latency(us) 00:19:40.094 [2024-12-06T10:18:46.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.094 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.094 Verification LBA range: start 0x0 length 0x2000 00:19:40.094 TLSTESTn1 : 10.01 5898.54 23.04 0.00 0.00 21669.94 4669.44 47185.92 00:19:40.094 [2024-12-06T10:18:46.261Z] =================================================================================================================== 00:19:40.094 [2024-12-06T10:18:46.261Z] Total : 5898.54 23.04 0.00 0.00 21669.94 4669.44 47185.92 00:19:40.094 { 00:19:40.094 "results": [ 00:19:40.094 { 00:19:40.094 "job": "TLSTESTn1", 00:19:40.094 "core_mask": "0x4", 00:19:40.094 "workload": "verify", 00:19:40.094 "status": "finished", 00:19:40.094 "verify_range": { 00:19:40.094 "start": 0, 00:19:40.094 "length": 8192 00:19:40.094 }, 00:19:40.094 "queue_depth": 128, 00:19:40.094 "io_size": 4096, 00:19:40.094 "runtime": 10.012472, 00:19:40.094 "iops": 
5898.543336750405, 00:19:40.094 "mibps": 23.04118490918127, 00:19:40.094 "io_failed": 0, 00:19:40.094 "io_timeout": 0, 00:19:40.094 "avg_latency_us": 21669.94490526423, 00:19:40.094 "min_latency_us": 4669.44, 00:19:40.094 "max_latency_us": 47185.92 00:19:40.094 } 00:19:40.094 ], 00:19:40.094 "core_count": 1 00:19:40.094 } 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3440150 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3440150 ']' 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3440150 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3440150 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3440150' 00:19:40.353 killing process with pid 3440150 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3440150 00:19:40.353 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.353 00:19:40.353 Latency(us) 00:19:40.353 [2024-12-06T10:18:46.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.353 [2024-12-06T10:18:46.520Z] 
=================================================================================================================== 00:19:40.353 [2024-12-06T10:18:46.520Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3440150 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gVuBHpWWEW 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gVuBHpWWEW 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gVuBHpWWEW 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gVuBHpWWEW 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3442165 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3442165 /var/tmp/bdevperf.sock 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3442165 ']' 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.353 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.353 [2024-12-06 11:18:46.494498] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
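The killprocess helper traced earlier (pid 3440150) checks that the pid still belongs to the expected job before tearing it down, then kills and reaps it so the next test starts clean. A simplified sketch; the guard against killing a process whose command name is `sudo`, visible in the trace, is kept:

```shell
# Sketch of killprocess: verify the pid, refuse privileged wrappers,
# then kill and reap it.
killprocess() {
	local pid=$1 process_name
	kill -0 "$pid" 2>/dev/null || return 1
	process_name=$(ps --no-headers -o comm= "$pid")
	if [ "$process_name" = sudo ]; then
		return 1                         # never kill a privileged wrapper
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" 2>/dev/null || true
}
```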
00:19:40.353 [2024-12-06 11:18:46.494552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442165 ] 00:19:40.613 [2024-12-06 11:18:46.559229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.613 [2024-12-06 11:18:46.586751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.613 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.613 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.613 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gVuBHpWWEW 00:19:40.873 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.873 [2024-12-06 11:18:47.005549] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.873 [2024-12-06 11:18:47.011541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:40.873 [2024-12-06 11:18:47.011673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19275b0 (107): Transport endpoint is not connected 00:19:40.873 [2024-12-06 11:18:47.012669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19275b0 (9): Bad file descriptor 00:19:40.873 
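The `NOT run_bdevperf ...` step above is a negative test: attaching with the wrong key (`/tmp/tmp.gVuBHpWWEW`) is expected to fail, and the NOT wrapper from autotest_common.sh inverts the exit status (`es`) so the test passes only when the failure really happens. A minimal sketch (the real helper also inspects `es > 128` to distinguish signal deaths; that is omitted here):

```shell
# Run a command that is expected to fail; succeed only if it did fail.
NOT() {
	local es=0
	"$@" || es=$?
	# es == 0 means the expected failure did not happen
	(( es != 0 ))
}
```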
[2024-12-06 11:18:47.013671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:40.873 [2024-12-06 11:18:47.013679] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:40.873 [2024-12-06 11:18:47.013685] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:40.873 [2024-12-06 11:18:47.013693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:40.873 request: 00:19:40.873 { 00:19:40.873 "name": "TLSTEST", 00:19:40.873 "trtype": "tcp", 00:19:40.873 "traddr": "10.0.0.2", 00:19:40.873 "adrfam": "ipv4", 00:19:40.873 "trsvcid": "4420", 00:19:40.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.873 "prchk_reftag": false, 00:19:40.873 "prchk_guard": false, 00:19:40.873 "hdgst": false, 00:19:40.873 "ddgst": false, 00:19:40.873 "psk": "key0", 00:19:40.873 "allow_unrecognized_csi": false, 00:19:40.873 "method": "bdev_nvme_attach_controller", 00:19:40.873 "req_id": 1 00:19:40.873 } 00:19:40.873 Got JSON-RPC error response 00:19:40.873 response: 00:19:40.873 { 00:19:40.873 "code": -5, 00:19:40.873 "message": "Input/output error" 00:19:40.873 } 00:19:41.132 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3442165 00:19:41.132 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3442165 ']' 00:19:41.132 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3442165 00:19:41.132 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.132 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.132 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3442165 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3442165' 00:19:41.133 killing process with pid 3442165 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3442165 00:19:41.133 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.133 00:19:41.133 Latency(us) 00:19:41.133 [2024-12-06T10:18:47.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.133 [2024-12-06T10:18:47.300Z] =================================================================================================================== 00:19:41.133 [2024-12-06T10:18:47.300Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3442165 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DRwEjVlhru 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DRwEjVlhru 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DRwEjVlhru 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DRwEjVlhru 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3442467 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3442467 /var/tmp/bdevperf.sock 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3442467 ']' 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.133 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.133 [2024-12-06 11:18:47.265458] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:19:41.133 [2024-12-06 11:18:47.265518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442467 ] 00:19:41.393 [2024-12-06 11:18:47.328593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.393 [2024-12-06 11:18:47.357187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.393 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.393 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:41.393 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DRwEjVlhru 00:19:41.653 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:41.653 [2024-12-06 11:18:47.759452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.653 [2024-12-06 11:18:47.763904] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:41.653 [2024-12-06 11:18:47.763924] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:41.653 [2024-12-06 11:18:47.763943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:41.653 [2024-12-06 11:18:47.764594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb75b0 (107): Transport endpoint is not connected 00:19:41.654 [2024-12-06 11:18:47.765589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb75b0 (9): Bad file descriptor 00:19:41.654 [2024-12-06 11:18:47.766591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:41.654 [2024-12-06 11:18:47.766598] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:41.654 [2024-12-06 11:18:47.766603] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:41.654 [2024-12-06 11:18:47.766611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:41.654 request: 00:19:41.654 { 00:19:41.654 "name": "TLSTEST", 00:19:41.654 "trtype": "tcp", 00:19:41.654 "traddr": "10.0.0.2", 00:19:41.654 "adrfam": "ipv4", 00:19:41.654 "trsvcid": "4420", 00:19:41.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.654 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:41.654 "prchk_reftag": false, 00:19:41.654 "prchk_guard": false, 00:19:41.654 "hdgst": false, 00:19:41.654 "ddgst": false, 00:19:41.654 "psk": "key0", 00:19:41.654 "allow_unrecognized_csi": false, 00:19:41.654 "method": "bdev_nvme_attach_controller", 00:19:41.654 "req_id": 1 00:19:41.654 } 00:19:41.654 Got JSON-RPC error response 00:19:41.654 response: 00:19:41.654 { 00:19:41.654 "code": -5, 00:19:41.654 "message": "Input/output error" 00:19:41.654 } 00:19:41.654 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3442467 00:19:41.654 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3442467 ']' 00:19:41.654 11:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3442467 00:19:41.654 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.654 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.654 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3442467 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3442467' 00:19:41.913 killing process with pid 3442467 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3442467 00:19:41.913 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.913 00:19:41.913 Latency(us) 00:19:41.913 [2024-12-06T10:18:48.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.913 [2024-12-06T10:18:48.080Z] =================================================================================================================== 00:19:41.913 [2024-12-06T10:18:48.080Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3442467 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.913 11:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRwEjVlhru 00:19:41.913 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRwEjVlhru 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DRwEjVlhru 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DRwEjVlhru 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3442520 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3442520 /var/tmp/bdevperf.sock 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3442520 ']' 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.914 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.914 [2024-12-06 11:18:48.011636] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:19:41.914 [2024-12-06 11:18:48.011692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442520 ] 00:19:41.914 [2024-12-06 11:18:48.076397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.174 [2024-12-06 11:18:48.104512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.174 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.174 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.174 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DRwEjVlhru 00:19:42.511 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.511 [2024-12-06 11:18:48.526933] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.511 [2024-12-06 11:18:48.535284] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:42.511 [2024-12-06 11:18:48.535302] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:42.511 [2024-12-06 11:18:48.535321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:42.511 [2024-12-06 11:18:48.536240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8505b0 (107): Transport endpoint is not connected 00:19:42.511 [2024-12-06 11:18:48.537236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8505b0 (9): Bad file descriptor 00:19:42.511 [2024-12-06 11:18:48.538238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:42.511 [2024-12-06 11:18:48.538247] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:42.511 [2024-12-06 11:18:48.538253] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:42.511 [2024-12-06 11:18:48.538261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:42.511 request: 00:19:42.511 { 00:19:42.511 "name": "TLSTEST", 00:19:42.511 "trtype": "tcp", 00:19:42.511 "traddr": "10.0.0.2", 00:19:42.511 "adrfam": "ipv4", 00:19:42.511 "trsvcid": "4420", 00:19:42.511 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:42.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.511 "prchk_reftag": false, 00:19:42.511 "prchk_guard": false, 00:19:42.511 "hdgst": false, 00:19:42.511 "ddgst": false, 00:19:42.511 "psk": "key0", 00:19:42.511 "allow_unrecognized_csi": false, 00:19:42.511 "method": "bdev_nvme_attach_controller", 00:19:42.511 "req_id": 1 00:19:42.511 } 00:19:42.511 Got JSON-RPC error response 00:19:42.511 response: 00:19:42.511 { 00:19:42.511 "code": -5, 00:19:42.511 "message": "Input/output error" 00:19:42.511 } 00:19:42.511 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3442520 00:19:42.511 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3442520 ']' 00:19:42.511 11:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3442520 00:19:42.511 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.511 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.512 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3442520 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3442520' 00:19:42.839 killing process with pid 3442520 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3442520 00:19:42.839 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.839 00:19:42.839 Latency(us) 00:19:42.839 [2024-12-06T10:18:49.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.839 [2024-12-06T10:18:49.006Z] =================================================================================================================== 00:19:42.839 [2024-12-06T10:18:49.006Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3442520 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.839 11:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3442711 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.839 11:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3442711 /var/tmp/bdevperf.sock 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.839 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3442711 ']' 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.840 [2024-12-06 11:18:48.786624] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:19:42.840 [2024-12-06 11:18:48.786681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442711 ] 00:19:42.840 [2024-12-06 11:18:48.851150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.840 [2024-12-06 11:18:48.879450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.840 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:43.136 [2024-12-06 11:18:49.117403] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:43.136 [2024-12-06 11:18:49.117429] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.136 request: 00:19:43.136 { 00:19:43.136 "name": "key0", 00:19:43.136 "path": "", 00:19:43.136 "method": "keyring_file_add_key", 00:19:43.136 "req_id": 1 00:19:43.136 } 00:19:43.136 Got JSON-RPC error response 00:19:43.136 response: 00:19:43.136 { 00:19:43.136 "code": -1, 00:19:43.136 "message": "Operation not permitted" 00:19:43.136 } 00:19:43.136 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.136 [2024-12-06 11:18:49.301949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:43.136 [2024-12-06 11:18:49.301975] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:43.396 request: 00:19:43.396 { 00:19:43.396 "name": "TLSTEST", 00:19:43.396 "trtype": "tcp", 00:19:43.396 "traddr": "10.0.0.2", 00:19:43.396 "adrfam": "ipv4", 00:19:43.396 "trsvcid": "4420", 00:19:43.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.396 "prchk_reftag": false, 00:19:43.396 "prchk_guard": false, 00:19:43.396 "hdgst": false, 00:19:43.396 "ddgst": false, 00:19:43.396 "psk": "key0", 00:19:43.396 "allow_unrecognized_csi": false, 00:19:43.396 "method": "bdev_nvme_attach_controller", 00:19:43.396 "req_id": 1 00:19:43.396 } 00:19:43.396 Got JSON-RPC error response 00:19:43.396 response: 00:19:43.396 { 00:19:43.396 "code": -126, 00:19:43.396 "message": "Required key not available" 00:19:43.396 } 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3442711 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3442711 ']' 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3442711 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3442711 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3442711' 00:19:43.396 killing process with pid 3442711 
00:19:43.396 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3442711 00:19:43.396 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.396 00:19:43.396 Latency(us) 00:19:43.396 [2024-12-06T10:18:49.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.396 [2024-12-06T10:18:49.563Z] =================================================================================================================== 00:19:43.396 [2024-12-06T10:18:49.564Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3442711 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3437208 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3437208 ']' 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3437208 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3437208 00:19:43.397 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:43.656 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.656 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3437208' 00:19:43.656 killing process with pid 3437208 00:19:43.656 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3437208 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3437208 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.LespPu6TWf 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:43.657 11:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.LespPu6TWf 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3442890 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3442890 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3442890 ']' 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.657 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.657 [2024-12-06 11:18:49.797549] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:19:43.657 [2024-12-06 11:18:49.797611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.916 [2024-12-06 11:18:49.894302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.916 [2024-12-06 11:18:49.922034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.916 [2024-12-06 11:18:49.922074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.916 [2024-12-06 11:18:49.922081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.916 [2024-12-06 11:18:49.922085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.916 [2024-12-06 11:18:49.922089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:43.916 [2024-12-06 11:18:49.922532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.LespPu6TWf 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LespPu6TWf 00:19:44.485 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.745 [2024-12-06 11:18:50.767155] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.745 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.004 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.004 [2024-12-06 11:18:51.120029] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.004 [2024-12-06 11:18:51.120238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:45.004 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.264 malloc0 00:19:45.264 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.525 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:19:45.525 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LespPu6TWf 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LespPu6TWf 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3443268 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3443268 /var/tmp/bdevperf.sock 
00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3443268 ']' 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.787 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.788 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.788 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.788 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.788 [2024-12-06 11:18:51.856472] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:19:45.788 [2024-12-06 11:18:51.856528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443268 ] 00:19:45.788 [2024-12-06 11:18:51.920142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.788 [2024-12-06 11:18:51.949234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.729 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.729 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.729 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:19:46.729 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.990 [2024-12-06 11:18:52.953143] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.990 TLSTESTn1 00:19:46.990 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:46.990 Running I/O for 10 seconds... 
00:19:49.316 5160.00 IOPS, 20.16 MiB/s [2024-12-06T10:18:56.424Z] 5492.50 IOPS, 21.46 MiB/s [2024-12-06T10:18:57.365Z] 5400.67 IOPS, 21.10 MiB/s [2024-12-06T10:18:58.321Z] 5702.00 IOPS, 22.27 MiB/s [2024-12-06T10:18:59.262Z] 5507.00 IOPS, 21.51 MiB/s [2024-12-06T10:19:00.202Z] 5394.67 IOPS, 21.07 MiB/s [2024-12-06T10:19:01.584Z] 5309.00 IOPS, 20.74 MiB/s [2024-12-06T10:19:02.156Z] 5413.12 IOPS, 21.15 MiB/s [2024-12-06T10:19:03.541Z] 5507.33 IOPS, 21.51 MiB/s [2024-12-06T10:19:03.541Z] 5432.90 IOPS, 21.22 MiB/s 00:19:57.374 Latency(us) 00:19:57.374 [2024-12-06T10:19:03.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.374 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:57.374 Verification LBA range: start 0x0 length 0x2000 00:19:57.374 TLSTESTn1 : 10.05 5418.49 21.17 0.00 0.00 23550.30 4532.91 46967.47 00:19:57.374 [2024-12-06T10:19:03.541Z] =================================================================================================================== 00:19:57.374 [2024-12-06T10:19:03.541Z] Total : 5418.49 21.17 0.00 0.00 23550.30 4532.91 46967.47 00:19:57.374 { 00:19:57.374 "results": [ 00:19:57.374 { 00:19:57.374 "job": "TLSTESTn1", 00:19:57.374 "core_mask": "0x4", 00:19:57.374 "workload": "verify", 00:19:57.374 "status": "finished", 00:19:57.374 "verify_range": { 00:19:57.374 "start": 0, 00:19:57.374 "length": 8192 00:19:57.374 }, 00:19:57.374 "queue_depth": 128, 00:19:57.374 "io_size": 4096, 00:19:57.374 "runtime": 10.050219, 00:19:57.374 "iops": 5418.488890640095, 00:19:57.374 "mibps": 21.16597222906287, 00:19:57.374 "io_failed": 0, 00:19:57.374 "io_timeout": 0, 00:19:57.374 "avg_latency_us": 23550.304145533784, 00:19:57.374 "min_latency_us": 4532.906666666667, 00:19:57.374 "max_latency_us": 46967.46666666667 00:19:57.374 } 00:19:57.374 ], 00:19:57.374 "core_count": 1 00:19:57.374 } 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3443268 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3443268 ']' 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3443268 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3443268 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3443268' 00:19:57.374 killing process with pid 3443268 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3443268 00:19:57.374 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.374 00:19:57.374 Latency(us) 00:19:57.374 [2024-12-06T10:19:03.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.374 [2024-12-06T10:19:03.541Z] =================================================================================================================== 00:19:57.374 [2024-12-06T10:19:03.541Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3443268 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.LespPu6TWf 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LespPu6TWf 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LespPu6TWf 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LespPu6TWf 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LespPu6TWf 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3445595 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3445595 
/var/tmp/bdevperf.sock 00:19:57.374 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.375 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3445595 ']' 00:19:57.375 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.375 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.375 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.375 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.375 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.375 [2024-12-06 11:19:03.455767] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:19:57.375 [2024-12-06 11:19:03.455822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445595 ] 00:19:57.375 [2024-12-06 11:19:03.520149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.635 [2024-12-06 11:19:03.547692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.635 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.635 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.635 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:19:57.635 [2024-12-06 11:19:03.781677] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LespPu6TWf': 0100666 00:19:57.635 [2024-12-06 11:19:03.781704] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:57.635 request: 00:19:57.635 { 00:19:57.635 "name": "key0", 00:19:57.635 "path": "/tmp/tmp.LespPu6TWf", 00:19:57.635 "method": "keyring_file_add_key", 00:19:57.635 "req_id": 1 00:19:57.635 } 00:19:57.635 Got JSON-RPC error response 00:19:57.635 response: 00:19:57.635 { 00:19:57.635 "code": -1, 00:19:57.635 "message": "Operation not permitted" 00:19:57.635 } 00:19:57.895 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.895 [2024-12-06 11:19:03.966219] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.895 [2024-12-06 11:19:03.966246] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:57.895 request: 00:19:57.895 { 00:19:57.895 "name": "TLSTEST", 00:19:57.895 "trtype": "tcp", 00:19:57.895 "traddr": "10.0.0.2", 00:19:57.895 "adrfam": "ipv4", 00:19:57.895 "trsvcid": "4420", 00:19:57.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.895 "prchk_reftag": false, 00:19:57.895 "prchk_guard": false, 00:19:57.895 "hdgst": false, 00:19:57.895 "ddgst": false, 00:19:57.895 "psk": "key0", 00:19:57.895 "allow_unrecognized_csi": false, 00:19:57.895 "method": "bdev_nvme_attach_controller", 00:19:57.895 "req_id": 1 00:19:57.895 } 00:19:57.895 Got JSON-RPC error response 00:19:57.895 response: 00:19:57.895 { 00:19:57.895 "code": -126, 00:19:57.895 "message": "Required key not available" 00:19:57.895 } 00:19:57.895 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3445595 00:19:57.895 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3445595 ']' 00:19:57.895 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3445595 00:19:57.895 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.895 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.895 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3445595 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3445595' 00:19:58.155 killing process with pid 3445595 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3445595 00:19:58.155 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.155 00:19:58.155 Latency(us) 00:19:58.155 [2024-12-06T10:19:04.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.155 [2024-12-06T10:19:04.322Z] =================================================================================================================== 00:19:58.155 [2024-12-06T10:19:04.322Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3445595 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3442890 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3442890 ']' 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3442890 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3442890 00:19:58.155 
11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3442890' 00:19:58.155 killing process with pid 3442890 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3442890 00:19:58.155 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3442890 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3445798 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3445798 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3445798 ']' 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:58.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.415 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.415 [2024-12-06 11:19:04.399723] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:19:58.415 [2024-12-06 11:19:04.399778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.415 [2024-12-06 11:19:04.498868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.415 [2024-12-06 11:19:04.531381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.415 [2024-12-06 11:19:04.531418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.415 [2024-12-06 11:19:04.531424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.415 [2024-12-06 11:19:04.531429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.415 [2024-12-06 11:19:04.531434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:58.415 [2024-12-06 11:19:04.531959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.LespPu6TWf 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LespPu6TWf 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.LespPu6TWf 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LespPu6TWf 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:59.355 [2024-12-06 11:19:05.387261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.355 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:59.616 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:59.616 [2024-12-06 11:19:05.700018] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.616 [2024-12-06 11:19:05.700206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.616 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:59.931 malloc0 00:19:59.931 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:59.931 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:20:00.192 [2024-12-06 11:19:06.190989] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LespPu6TWf': 0100666 00:20:00.192 [2024-12-06 11:19:06.191012] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:00.192 request: 00:20:00.192 { 00:20:00.192 "name": "key0", 00:20:00.192 "path": "/tmp/tmp.LespPu6TWf", 00:20:00.192 "method": "keyring_file_add_key", 00:20:00.192 "req_id": 1 
00:20:00.192 } 00:20:00.192 Got JSON-RPC error response 00:20:00.192 response: 00:20:00.192 { 00:20:00.192 "code": -1, 00:20:00.192 "message": "Operation not permitted" 00:20:00.192 } 00:20:00.192 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:00.192 [2024-12-06 11:19:06.343384] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:00.192 [2024-12-06 11:19:06.343409] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:00.192 request: 00:20:00.192 { 00:20:00.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.192 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.192 "psk": "key0", 00:20:00.192 "method": "nvmf_subsystem_add_host", 00:20:00.192 "req_id": 1 00:20:00.192 } 00:20:00.192 Got JSON-RPC error response 00:20:00.192 response: 00:20:00.192 { 00:20:00.192 "code": -32603, 00:20:00.192 "message": "Internal error" 00:20:00.192 } 00:20:00.192 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:00.192 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.192 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3445798 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3445798 ']' 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3445798 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.453 11:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3445798 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3445798' 00:20:00.453 killing process with pid 3445798 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3445798 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3445798 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.LespPu6TWf 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3446311 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3446311 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3446311 ']' 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.453 
11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.453 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:00.453 [2024-12-06 11:19:06.597693] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:00.453 [2024-12-06 11:19:06.597744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.715 [2024-12-06 11:19:06.692454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.715 [2024-12-06 11:19:06.719973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.715 [2024-12-06 11:19:06.720004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.715 [2024-12-06 11:19:06.720010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.715 [2024-12-06 11:19:06.720015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.715 [2024-12-06 11:19:06.720019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.715 [2024-12-06 11:19:06.720491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.LespPu6TWf 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LespPu6TWf 00:20:00.715 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.975 [2024-12-06 11:19:06.983353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.975 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.236 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.237 [2024-12-06 11:19:07.308152] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.237 [2024-12-06 11:19:07.308358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:01.237 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.497 malloc0 00:20:01.497 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.756 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:20:01.756 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3446633 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3446633 /var/tmp/bdevperf.sock 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3446633 ']' 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
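The target-side TLS bring-up traced above (nvmf_create_transport, nvmf_subsystem_add_listener -k, keyring_file_add_key, then nvmf_subsystem_add_host --psk key0) only succeeds once the key file is registered, and the failed pass earlier in the log ("Operation not permitted" from the keyring, then "Key 'key0' does not exist" from add_host) suggests the key file's permissions were the problem before the chmod 0600 at target/tls.sh@182. A minimal sketch of preparing a key file that keyring_file_add_key should accept; the path and key payload below are placeholders, not the contents of /tmp/tmp.LespPu6TWf:

```shell
# Sketch: prepare a TLS PSK key file for SPDK's file-based keyring.
# The interchange string is a placeholder, not a real credential.
key=$(mktemp /tmp/psk.XXXXXX)
printf 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
# The file-based keyring refuses key files readable by group/other,
# hence the chmod 0600 step in tls.sh before key0 is re-registered.
chmod 0600 "$key"
stat -c '%a' "$key"   # → 600
```

With the mode fixed, the same keyring_file_add_key / nvmf_subsystem_add_host pair that failed at target/tls.sh@58-59 goes through, as the later trace shows.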
00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.016 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.017 [2024-12-06 11:19:08.062720] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:02.017 [2024-12-06 11:19:08.062773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446633 ] 00:20:02.017 [2024-12-06 11:19:08.129037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.017 [2024-12-06 11:19:08.158006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.279 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.279 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.279 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:20:02.279 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.540 [2024-12-06 11:19:08.560360] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.540 TLSTESTn1 
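TLSTESTn1 attaching above means the --psk handshake completed; the save_config dumps that follow record the key only as a reference ("psk": "key0") plus the keyring file path. The interchange string inside such a key file has a fixed layout, and checking that layout can help when a key is rejected. The string below is a placeholder shaped like SPDK's test keys, and the 36-byte expectation is an assumption: a 32-byte configured PSK plus the 4-byte CRC32 appended before base64 encoding, which is what these tests appear to use with the 01 (SHA-256) identifier:

```shell
# Placeholder key in NVMe TLS PSK interchange layout: NVMeTLSkey-1:<hmac>:<base64>:
psk='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
payload=${psk#NVMeTLSkey-1:01:}   # strip the version/HMAC-id prefix
payload=${payload%:}              # strip the trailing colon
# Assuming a 32-byte key + 4-byte CRC32, the decoded payload is 36 bytes.
printf '%s' "$payload" | base64 -d | wc -c   # → 36
```

A payload that fails to decode, or decodes to an unexpected length, would make both keyring_file_add_key and the subsequent bdev_nvme_attach_controller --psk fail long before the TLS handshake.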
00:20:02.540 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:02.803 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:02.803 "subsystems": [ 00:20:02.803 { 00:20:02.803 "subsystem": "keyring", 00:20:02.803 "config": [ 00:20:02.803 { 00:20:02.803 "method": "keyring_file_add_key", 00:20:02.803 "params": { 00:20:02.803 "name": "key0", 00:20:02.803 "path": "/tmp/tmp.LespPu6TWf" 00:20:02.803 } 00:20:02.803 } 00:20:02.803 ] 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "subsystem": "iobuf", 00:20:02.803 "config": [ 00:20:02.803 { 00:20:02.803 "method": "iobuf_set_options", 00:20:02.803 "params": { 00:20:02.803 "small_pool_count": 8192, 00:20:02.803 "large_pool_count": 1024, 00:20:02.803 "small_bufsize": 8192, 00:20:02.803 "large_bufsize": 135168, 00:20:02.803 "enable_numa": false 00:20:02.803 } 00:20:02.803 } 00:20:02.803 ] 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "subsystem": "sock", 00:20:02.803 "config": [ 00:20:02.803 { 00:20:02.803 "method": "sock_set_default_impl", 00:20:02.803 "params": { 00:20:02.803 "impl_name": "posix" 00:20:02.803 } 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "method": "sock_impl_set_options", 00:20:02.803 "params": { 00:20:02.803 "impl_name": "ssl", 00:20:02.803 "recv_buf_size": 4096, 00:20:02.803 "send_buf_size": 4096, 00:20:02.803 "enable_recv_pipe": true, 00:20:02.803 "enable_quickack": false, 00:20:02.803 "enable_placement_id": 0, 00:20:02.803 "enable_zerocopy_send_server": true, 00:20:02.803 "enable_zerocopy_send_client": false, 00:20:02.803 "zerocopy_threshold": 0, 00:20:02.803 "tls_version": 0, 00:20:02.803 "enable_ktls": false 00:20:02.803 } 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "method": "sock_impl_set_options", 00:20:02.803 "params": { 00:20:02.803 "impl_name": "posix", 00:20:02.803 "recv_buf_size": 2097152, 00:20:02.803 "send_buf_size": 2097152, 00:20:02.803 "enable_recv_pipe": true, 
00:20:02.803 "enable_quickack": false, 00:20:02.803 "enable_placement_id": 0, 00:20:02.803 "enable_zerocopy_send_server": true, 00:20:02.803 "enable_zerocopy_send_client": false, 00:20:02.803 "zerocopy_threshold": 0, 00:20:02.803 "tls_version": 0, 00:20:02.803 "enable_ktls": false 00:20:02.803 } 00:20:02.803 } 00:20:02.803 ] 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "subsystem": "vmd", 00:20:02.803 "config": [] 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "subsystem": "accel", 00:20:02.803 "config": [ 00:20:02.803 { 00:20:02.803 "method": "accel_set_options", 00:20:02.803 "params": { 00:20:02.803 "small_cache_size": 128, 00:20:02.803 "large_cache_size": 16, 00:20:02.803 "task_count": 2048, 00:20:02.803 "sequence_count": 2048, 00:20:02.803 "buf_count": 2048 00:20:02.803 } 00:20:02.803 } 00:20:02.803 ] 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "subsystem": "bdev", 00:20:02.803 "config": [ 00:20:02.803 { 00:20:02.803 "method": "bdev_set_options", 00:20:02.803 "params": { 00:20:02.803 "bdev_io_pool_size": 65535, 00:20:02.803 "bdev_io_cache_size": 256, 00:20:02.803 "bdev_auto_examine": true, 00:20:02.803 "iobuf_small_cache_size": 128, 00:20:02.803 "iobuf_large_cache_size": 16 00:20:02.803 } 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "method": "bdev_raid_set_options", 00:20:02.803 "params": { 00:20:02.803 "process_window_size_kb": 1024, 00:20:02.803 "process_max_bandwidth_mb_sec": 0 00:20:02.803 } 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "method": "bdev_iscsi_set_options", 00:20:02.803 "params": { 00:20:02.803 "timeout_sec": 30 00:20:02.803 } 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "method": "bdev_nvme_set_options", 00:20:02.803 "params": { 00:20:02.803 "action_on_timeout": "none", 00:20:02.803 "timeout_us": 0, 00:20:02.803 "timeout_admin_us": 0, 00:20:02.803 "keep_alive_timeout_ms": 10000, 00:20:02.803 "arbitration_burst": 0, 00:20:02.803 "low_priority_weight": 0, 00:20:02.803 "medium_priority_weight": 0, 00:20:02.803 "high_priority_weight": 0, 00:20:02.803 
"nvme_adminq_poll_period_us": 10000, 00:20:02.803 "nvme_ioq_poll_period_us": 0, 00:20:02.803 "io_queue_requests": 0, 00:20:02.803 "delay_cmd_submit": true, 00:20:02.803 "transport_retry_count": 4, 00:20:02.803 "bdev_retry_count": 3, 00:20:02.803 "transport_ack_timeout": 0, 00:20:02.803 "ctrlr_loss_timeout_sec": 0, 00:20:02.803 "reconnect_delay_sec": 0, 00:20:02.803 "fast_io_fail_timeout_sec": 0, 00:20:02.803 "disable_auto_failback": false, 00:20:02.803 "generate_uuids": false, 00:20:02.803 "transport_tos": 0, 00:20:02.803 "nvme_error_stat": false, 00:20:02.803 "rdma_srq_size": 0, 00:20:02.803 "io_path_stat": false, 00:20:02.803 "allow_accel_sequence": false, 00:20:02.803 "rdma_max_cq_size": 0, 00:20:02.803 "rdma_cm_event_timeout_ms": 0, 00:20:02.803 "dhchap_digests": [ 00:20:02.803 "sha256", 00:20:02.803 "sha384", 00:20:02.803 "sha512" 00:20:02.803 ], 00:20:02.803 "dhchap_dhgroups": [ 00:20:02.803 "null", 00:20:02.803 "ffdhe2048", 00:20:02.803 "ffdhe3072", 00:20:02.803 "ffdhe4096", 00:20:02.803 "ffdhe6144", 00:20:02.803 "ffdhe8192" 00:20:02.803 ] 00:20:02.803 } 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "method": "bdev_nvme_set_hotplug", 00:20:02.803 "params": { 00:20:02.803 "period_us": 100000, 00:20:02.803 "enable": false 00:20:02.803 } 00:20:02.803 }, 00:20:02.803 { 00:20:02.803 "method": "bdev_malloc_create", 00:20:02.804 "params": { 00:20:02.804 "name": "malloc0", 00:20:02.804 "num_blocks": 8192, 00:20:02.804 "block_size": 4096, 00:20:02.804 "physical_block_size": 4096, 00:20:02.804 "uuid": "286d986c-bbfa-421a-af83-ca6d568d9b2c", 00:20:02.804 "optimal_io_boundary": 0, 00:20:02.804 "md_size": 0, 00:20:02.804 "dif_type": 0, 00:20:02.804 "dif_is_head_of_md": false, 00:20:02.804 "dif_pi_format": 0 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "bdev_wait_for_examine" 00:20:02.804 } 00:20:02.804 ] 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "subsystem": "nbd", 00:20:02.804 "config": [] 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "subsystem": 
"scheduler", 00:20:02.804 "config": [ 00:20:02.804 { 00:20:02.804 "method": "framework_set_scheduler", 00:20:02.804 "params": { 00:20:02.804 "name": "static" 00:20:02.804 } 00:20:02.804 } 00:20:02.804 ] 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "subsystem": "nvmf", 00:20:02.804 "config": [ 00:20:02.804 { 00:20:02.804 "method": "nvmf_set_config", 00:20:02.804 "params": { 00:20:02.804 "discovery_filter": "match_any", 00:20:02.804 "admin_cmd_passthru": { 00:20:02.804 "identify_ctrlr": false 00:20:02.804 }, 00:20:02.804 "dhchap_digests": [ 00:20:02.804 "sha256", 00:20:02.804 "sha384", 00:20:02.804 "sha512" 00:20:02.804 ], 00:20:02.804 "dhchap_dhgroups": [ 00:20:02.804 "null", 00:20:02.804 "ffdhe2048", 00:20:02.804 "ffdhe3072", 00:20:02.804 "ffdhe4096", 00:20:02.804 "ffdhe6144", 00:20:02.804 "ffdhe8192" 00:20:02.804 ] 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "nvmf_set_max_subsystems", 00:20:02.804 "params": { 00:20:02.804 "max_subsystems": 1024 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "nvmf_set_crdt", 00:20:02.804 "params": { 00:20:02.804 "crdt1": 0, 00:20:02.804 "crdt2": 0, 00:20:02.804 "crdt3": 0 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "nvmf_create_transport", 00:20:02.804 "params": { 00:20:02.804 "trtype": "TCP", 00:20:02.804 "max_queue_depth": 128, 00:20:02.804 "max_io_qpairs_per_ctrlr": 127, 00:20:02.804 "in_capsule_data_size": 4096, 00:20:02.804 "max_io_size": 131072, 00:20:02.804 "io_unit_size": 131072, 00:20:02.804 "max_aq_depth": 128, 00:20:02.804 "num_shared_buffers": 511, 00:20:02.804 "buf_cache_size": 4294967295, 00:20:02.804 "dif_insert_or_strip": false, 00:20:02.804 "zcopy": false, 00:20:02.804 "c2h_success": false, 00:20:02.804 "sock_priority": 0, 00:20:02.804 "abort_timeout_sec": 1, 00:20:02.804 "ack_timeout": 0, 00:20:02.804 "data_wr_pool_size": 0 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "nvmf_create_subsystem", 00:20:02.804 "params": { 
00:20:02.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.804 "allow_any_host": false, 00:20:02.804 "serial_number": "SPDK00000000000001", 00:20:02.804 "model_number": "SPDK bdev Controller", 00:20:02.804 "max_namespaces": 10, 00:20:02.804 "min_cntlid": 1, 00:20:02.804 "max_cntlid": 65519, 00:20:02.804 "ana_reporting": false 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "nvmf_subsystem_add_host", 00:20:02.804 "params": { 00:20:02.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.804 "host": "nqn.2016-06.io.spdk:host1", 00:20:02.804 "psk": "key0" 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "nvmf_subsystem_add_ns", 00:20:02.804 "params": { 00:20:02.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.804 "namespace": { 00:20:02.804 "nsid": 1, 00:20:02.804 "bdev_name": "malloc0", 00:20:02.804 "nguid": "286D986CBBFA421AAF83CA6D568D9B2C", 00:20:02.804 "uuid": "286d986c-bbfa-421a-af83-ca6d568d9b2c", 00:20:02.804 "no_auto_visible": false 00:20:02.804 } 00:20:02.804 } 00:20:02.804 }, 00:20:02.804 { 00:20:02.804 "method": "nvmf_subsystem_add_listener", 00:20:02.804 "params": { 00:20:02.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.804 "listen_address": { 00:20:02.804 "trtype": "TCP", 00:20:02.804 "adrfam": "IPv4", 00:20:02.804 "traddr": "10.0.0.2", 00:20:02.804 "trsvcid": "4420" 00:20:02.804 }, 00:20:02.804 "secure_channel": true 00:20:02.804 } 00:20:02.804 } 00:20:02.804 ] 00:20:02.804 } 00:20:02.804 ] 00:20:02.804 }' 00:20:02.804 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:03.066 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:03.066 "subsystems": [ 00:20:03.066 { 00:20:03.066 "subsystem": "keyring", 00:20:03.066 "config": [ 00:20:03.066 { 00:20:03.066 "method": "keyring_file_add_key", 00:20:03.066 "params": { 00:20:03.066 "name": "key0", 
00:20:03.066 "path": "/tmp/tmp.LespPu6TWf" 00:20:03.066 } 00:20:03.066 } 00:20:03.066 ] 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "subsystem": "iobuf", 00:20:03.066 "config": [ 00:20:03.066 { 00:20:03.066 "method": "iobuf_set_options", 00:20:03.066 "params": { 00:20:03.066 "small_pool_count": 8192, 00:20:03.066 "large_pool_count": 1024, 00:20:03.066 "small_bufsize": 8192, 00:20:03.066 "large_bufsize": 135168, 00:20:03.066 "enable_numa": false 00:20:03.066 } 00:20:03.066 } 00:20:03.066 ] 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "subsystem": "sock", 00:20:03.066 "config": [ 00:20:03.066 { 00:20:03.066 "method": "sock_set_default_impl", 00:20:03.066 "params": { 00:20:03.066 "impl_name": "posix" 00:20:03.066 } 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "method": "sock_impl_set_options", 00:20:03.066 "params": { 00:20:03.066 "impl_name": "ssl", 00:20:03.066 "recv_buf_size": 4096, 00:20:03.066 "send_buf_size": 4096, 00:20:03.066 "enable_recv_pipe": true, 00:20:03.066 "enable_quickack": false, 00:20:03.066 "enable_placement_id": 0, 00:20:03.066 "enable_zerocopy_send_server": true, 00:20:03.066 "enable_zerocopy_send_client": false, 00:20:03.066 "zerocopy_threshold": 0, 00:20:03.066 "tls_version": 0, 00:20:03.066 "enable_ktls": false 00:20:03.066 } 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "method": "sock_impl_set_options", 00:20:03.066 "params": { 00:20:03.066 "impl_name": "posix", 00:20:03.066 "recv_buf_size": 2097152, 00:20:03.066 "send_buf_size": 2097152, 00:20:03.066 "enable_recv_pipe": true, 00:20:03.066 "enable_quickack": false, 00:20:03.066 "enable_placement_id": 0, 00:20:03.066 "enable_zerocopy_send_server": true, 00:20:03.066 "enable_zerocopy_send_client": false, 00:20:03.066 "zerocopy_threshold": 0, 00:20:03.066 "tls_version": 0, 00:20:03.066 "enable_ktls": false 00:20:03.066 } 00:20:03.066 } 00:20:03.066 ] 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "subsystem": "vmd", 00:20:03.066 "config": [] 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "subsystem": 
"accel", 00:20:03.066 "config": [ 00:20:03.066 { 00:20:03.066 "method": "accel_set_options", 00:20:03.066 "params": { 00:20:03.066 "small_cache_size": 128, 00:20:03.066 "large_cache_size": 16, 00:20:03.066 "task_count": 2048, 00:20:03.066 "sequence_count": 2048, 00:20:03.066 "buf_count": 2048 00:20:03.066 } 00:20:03.066 } 00:20:03.066 ] 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "subsystem": "bdev", 00:20:03.066 "config": [ 00:20:03.066 { 00:20:03.066 "method": "bdev_set_options", 00:20:03.066 "params": { 00:20:03.066 "bdev_io_pool_size": 65535, 00:20:03.066 "bdev_io_cache_size": 256, 00:20:03.066 "bdev_auto_examine": true, 00:20:03.066 "iobuf_small_cache_size": 128, 00:20:03.066 "iobuf_large_cache_size": 16 00:20:03.066 } 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "method": "bdev_raid_set_options", 00:20:03.066 "params": { 00:20:03.066 "process_window_size_kb": 1024, 00:20:03.066 "process_max_bandwidth_mb_sec": 0 00:20:03.066 } 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "method": "bdev_iscsi_set_options", 00:20:03.066 "params": { 00:20:03.066 "timeout_sec": 30 00:20:03.066 } 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "method": "bdev_nvme_set_options", 00:20:03.066 "params": { 00:20:03.066 "action_on_timeout": "none", 00:20:03.066 "timeout_us": 0, 00:20:03.066 "timeout_admin_us": 0, 00:20:03.066 "keep_alive_timeout_ms": 10000, 00:20:03.066 "arbitration_burst": 0, 00:20:03.066 "low_priority_weight": 0, 00:20:03.066 "medium_priority_weight": 0, 00:20:03.066 "high_priority_weight": 0, 00:20:03.066 "nvme_adminq_poll_period_us": 10000, 00:20:03.066 "nvme_ioq_poll_period_us": 0, 00:20:03.066 "io_queue_requests": 512, 00:20:03.066 "delay_cmd_submit": true, 00:20:03.066 "transport_retry_count": 4, 00:20:03.066 "bdev_retry_count": 3, 00:20:03.066 "transport_ack_timeout": 0, 00:20:03.066 "ctrlr_loss_timeout_sec": 0, 00:20:03.066 "reconnect_delay_sec": 0, 00:20:03.066 "fast_io_fail_timeout_sec": 0, 00:20:03.066 "disable_auto_failback": false, 00:20:03.066 
"generate_uuids": false, 00:20:03.066 "transport_tos": 0, 00:20:03.066 "nvme_error_stat": false, 00:20:03.066 "rdma_srq_size": 0, 00:20:03.066 "io_path_stat": false, 00:20:03.066 "allow_accel_sequence": false, 00:20:03.066 "rdma_max_cq_size": 0, 00:20:03.066 "rdma_cm_event_timeout_ms": 0, 00:20:03.066 "dhchap_digests": [ 00:20:03.066 "sha256", 00:20:03.066 "sha384", 00:20:03.066 "sha512" 00:20:03.066 ], 00:20:03.066 "dhchap_dhgroups": [ 00:20:03.066 "null", 00:20:03.066 "ffdhe2048", 00:20:03.066 "ffdhe3072", 00:20:03.066 "ffdhe4096", 00:20:03.066 "ffdhe6144", 00:20:03.066 "ffdhe8192" 00:20:03.066 ] 00:20:03.066 } 00:20:03.066 }, 00:20:03.066 { 00:20:03.066 "method": "bdev_nvme_attach_controller", 00:20:03.066 "params": { 00:20:03.066 "name": "TLSTEST", 00:20:03.066 "trtype": "TCP", 00:20:03.066 "adrfam": "IPv4", 00:20:03.066 "traddr": "10.0.0.2", 00:20:03.066 "trsvcid": "4420", 00:20:03.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.067 "prchk_reftag": false, 00:20:03.067 "prchk_guard": false, 00:20:03.067 "ctrlr_loss_timeout_sec": 0, 00:20:03.067 "reconnect_delay_sec": 0, 00:20:03.067 "fast_io_fail_timeout_sec": 0, 00:20:03.067 "psk": "key0", 00:20:03.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.067 "hdgst": false, 00:20:03.067 "ddgst": false, 00:20:03.067 "multipath": "multipath" 00:20:03.067 } 00:20:03.067 }, 00:20:03.067 { 00:20:03.067 "method": "bdev_nvme_set_hotplug", 00:20:03.067 "params": { 00:20:03.067 "period_us": 100000, 00:20:03.067 "enable": false 00:20:03.067 } 00:20:03.067 }, 00:20:03.067 { 00:20:03.067 "method": "bdev_wait_for_examine" 00:20:03.067 } 00:20:03.067 ] 00:20:03.067 }, 00:20:03.067 { 00:20:03.067 "subsystem": "nbd", 00:20:03.067 "config": [] 00:20:03.067 } 00:20:03.067 ] 00:20:03.067 }' 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3446633 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3446633 ']' 00:20:03.067 11:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3446633 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446633 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446633' 00:20:03.067 killing process with pid 3446633 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3446633 00:20:03.067 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.067 00:20:03.067 Latency(us) 00:20:03.067 [2024-12-06T10:19:09.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.067 [2024-12-06T10:19:09.234Z] =================================================================================================================== 00:20:03.067 [2024-12-06T10:19:09.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.067 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3446633 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3446311 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3446311 ']' 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3446311 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446311 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446311' 00:20:03.328 killing process with pid 3446311 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3446311 00:20:03.328 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3446311 00:20:03.590 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:03.590 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.590 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.590 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.590 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:03.590 "subsystems": [ 00:20:03.590 { 00:20:03.590 "subsystem": "keyring", 00:20:03.590 "config": [ 00:20:03.590 { 00:20:03.590 "method": "keyring_file_add_key", 00:20:03.590 "params": { 00:20:03.590 "name": "key0", 00:20:03.590 "path": "/tmp/tmp.LespPu6TWf" 00:20:03.590 } 00:20:03.590 } 00:20:03.590 ] 00:20:03.590 }, 00:20:03.590 { 00:20:03.590 "subsystem": "iobuf", 00:20:03.590 "config": [ 00:20:03.590 { 00:20:03.590 "method": "iobuf_set_options", 00:20:03.591 "params": { 00:20:03.591 "small_pool_count": 8192, 00:20:03.591 "large_pool_count": 1024, 00:20:03.591 
"small_bufsize": 8192, 00:20:03.591 "large_bufsize": 135168, 00:20:03.591 "enable_numa": false 00:20:03.591 } 00:20:03.591 } 00:20:03.591 ] 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "subsystem": "sock", 00:20:03.591 "config": [ 00:20:03.591 { 00:20:03.591 "method": "sock_set_default_impl", 00:20:03.591 "params": { 00:20:03.591 "impl_name": "posix" 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "sock_impl_set_options", 00:20:03.591 "params": { 00:20:03.591 "impl_name": "ssl", 00:20:03.591 "recv_buf_size": 4096, 00:20:03.591 "send_buf_size": 4096, 00:20:03.591 "enable_recv_pipe": true, 00:20:03.591 "enable_quickack": false, 00:20:03.591 "enable_placement_id": 0, 00:20:03.591 "enable_zerocopy_send_server": true, 00:20:03.591 "enable_zerocopy_send_client": false, 00:20:03.591 "zerocopy_threshold": 0, 00:20:03.591 "tls_version": 0, 00:20:03.591 "enable_ktls": false 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "sock_impl_set_options", 00:20:03.591 "params": { 00:20:03.591 "impl_name": "posix", 00:20:03.591 "recv_buf_size": 2097152, 00:20:03.591 "send_buf_size": 2097152, 00:20:03.591 "enable_recv_pipe": true, 00:20:03.591 "enable_quickack": false, 00:20:03.591 "enable_placement_id": 0, 00:20:03.591 "enable_zerocopy_send_server": true, 00:20:03.591 "enable_zerocopy_send_client": false, 00:20:03.591 "zerocopy_threshold": 0, 00:20:03.591 "tls_version": 0, 00:20:03.591 "enable_ktls": false 00:20:03.591 } 00:20:03.591 } 00:20:03.591 ] 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "subsystem": "vmd", 00:20:03.591 "config": [] 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "subsystem": "accel", 00:20:03.591 "config": [ 00:20:03.591 { 00:20:03.591 "method": "accel_set_options", 00:20:03.591 "params": { 00:20:03.591 "small_cache_size": 128, 00:20:03.591 "large_cache_size": 16, 00:20:03.591 "task_count": 2048, 00:20:03.591 "sequence_count": 2048, 00:20:03.591 "buf_count": 2048 00:20:03.591 } 00:20:03.591 } 00:20:03.591 ] 00:20:03.591 }, 
00:20:03.591 { 00:20:03.591 "subsystem": "bdev", 00:20:03.591 "config": [ 00:20:03.591 { 00:20:03.591 "method": "bdev_set_options", 00:20:03.591 "params": { 00:20:03.591 "bdev_io_pool_size": 65535, 00:20:03.591 "bdev_io_cache_size": 256, 00:20:03.591 "bdev_auto_examine": true, 00:20:03.591 "iobuf_small_cache_size": 128, 00:20:03.591 "iobuf_large_cache_size": 16 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "bdev_raid_set_options", 00:20:03.591 "params": { 00:20:03.591 "process_window_size_kb": 1024, 00:20:03.591 "process_max_bandwidth_mb_sec": 0 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "bdev_iscsi_set_options", 00:20:03.591 "params": { 00:20:03.591 "timeout_sec": 30 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "bdev_nvme_set_options", 00:20:03.591 "params": { 00:20:03.591 "action_on_timeout": "none", 00:20:03.591 "timeout_us": 0, 00:20:03.591 "timeout_admin_us": 0, 00:20:03.591 "keep_alive_timeout_ms": 10000, 00:20:03.591 "arbitration_burst": 0, 00:20:03.591 "low_priority_weight": 0, 00:20:03.591 "medium_priority_weight": 0, 00:20:03.591 "high_priority_weight": 0, 00:20:03.591 "nvme_adminq_poll_period_us": 10000, 00:20:03.591 "nvme_ioq_poll_period_us": 0, 00:20:03.591 "io_queue_requests": 0, 00:20:03.591 "delay_cmd_submit": true, 00:20:03.591 "transport_retry_count": 4, 00:20:03.591 "bdev_retry_count": 3, 00:20:03.591 "transport_ack_timeout": 0, 00:20:03.591 "ctrlr_loss_timeout_sec": 0, 00:20:03.591 "reconnect_delay_sec": 0, 00:20:03.591 "fast_io_fail_timeout_sec": 0, 00:20:03.591 "disable_auto_failback": false, 00:20:03.591 "generate_uuids": false, 00:20:03.591 "transport_tos": 0, 00:20:03.591 "nvme_error_stat": false, 00:20:03.591 "rdma_srq_size": 0, 00:20:03.591 "io_path_stat": false, 00:20:03.591 "allow_accel_sequence": false, 00:20:03.591 "rdma_max_cq_size": 0, 00:20:03.591 "rdma_cm_event_timeout_ms": 0, 00:20:03.591 "dhchap_digests": [ 00:20:03.591 "sha256", 00:20:03.591 "sha384", 
00:20:03.591 "sha512" 00:20:03.591 ], 00:20:03.591 "dhchap_dhgroups": [ 00:20:03.591 "null", 00:20:03.591 "ffdhe2048", 00:20:03.591 "ffdhe3072", 00:20:03.591 "ffdhe4096", 00:20:03.591 "ffdhe6144", 00:20:03.591 "ffdhe8192" 00:20:03.591 ] 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "bdev_nvme_set_hotplug", 00:20:03.591 "params": { 00:20:03.591 "period_us": 100000, 00:20:03.591 "enable": false 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "bdev_malloc_create", 00:20:03.591 "params": { 00:20:03.591 "name": "malloc0", 00:20:03.591 "num_blocks": 8192, 00:20:03.591 "block_size": 4096, 00:20:03.591 "physical_block_size": 4096, 00:20:03.591 "uuid": "286d986c-bbfa-421a-af83-ca6d568d9b2c", 00:20:03.591 "optimal_io_boundary": 0, 00:20:03.591 "md_size": 0, 00:20:03.591 "dif_type": 0, 00:20:03.591 "dif_is_head_of_md": false, 00:20:03.591 "dif_pi_format": 0 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "bdev_wait_for_examine" 00:20:03.591 } 00:20:03.591 ] 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "subsystem": "nbd", 00:20:03.591 "config": [] 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "subsystem": "scheduler", 00:20:03.591 "config": [ 00:20:03.591 { 00:20:03.591 "method": "framework_set_scheduler", 00:20:03.591 "params": { 00:20:03.591 "name": "static" 00:20:03.591 } 00:20:03.591 } 00:20:03.591 ] 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "subsystem": "nvmf", 00:20:03.591 "config": [ 00:20:03.591 { 00:20:03.591 "method": "nvmf_set_config", 00:20:03.591 "params": { 00:20:03.591 "discovery_filter": "match_any", 00:20:03.591 "admin_cmd_passthru": { 00:20:03.591 "identify_ctrlr": false 00:20:03.591 }, 00:20:03.591 "dhchap_digests": [ 00:20:03.591 "sha256", 00:20:03.591 "sha384", 00:20:03.591 "sha512" 00:20:03.591 ], 00:20:03.591 "dhchap_dhgroups": [ 00:20:03.591 "null", 00:20:03.591 "ffdhe2048", 00:20:03.591 "ffdhe3072", 00:20:03.591 "ffdhe4096", 00:20:03.591 "ffdhe6144", 00:20:03.591 "ffdhe8192" 00:20:03.591 ] 
00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "nvmf_set_max_subsystems", 00:20:03.591 "params": { 00:20:03.591 "max_subsystems": 1024 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "nvmf_set_crdt", 00:20:03.591 "params": { 00:20:03.591 "crdt1": 0, 00:20:03.591 "crdt2": 0, 00:20:03.591 "crdt3": 0 00:20:03.591 } 00:20:03.591 }, 00:20:03.591 { 00:20:03.591 "method": "nvmf_create_transport", 00:20:03.591 "params": { 00:20:03.591 "trtype": "TCP", 00:20:03.591 "max_queue_depth": 128, 00:20:03.591 "max_io_qpairs_per_ctrlr": 127, 00:20:03.591 "in_capsule_data_size": 4096, 00:20:03.591 "max_io_size": 131072, 00:20:03.591 "io_unit_size": 131072, 00:20:03.591 "max_aq_depth": 128, 00:20:03.591 "num_shared_buffers": 511, 00:20:03.591 "buf_cache_size": 4294967295, 00:20:03.591 "dif_insert_or_strip": false, 00:20:03.591 "zcopy": false, 00:20:03.592 "c2h_success": false, 00:20:03.592 "sock_priority": 0, 00:20:03.592 "abort_timeout_sec": 1, 00:20:03.592 "ack_timeout": 0, 00:20:03.592 "data_wr_pool_size": 0 00:20:03.592 } 00:20:03.592 }, 00:20:03.592 { 00:20:03.592 "method": "nvmf_create_subsystem", 00:20:03.592 "params": { 00:20:03.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.592 "allow_any_host": false, 00:20:03.592 "serial_number": "SPDK00000000000001", 00:20:03.592 "model_number": "SPDK bdev Controller", 00:20:03.592 "max_namespaces": 10, 00:20:03.592 "min_cntlid": 1, 00:20:03.592 "max_cntlid": 65519, 00:20:03.592 "ana_reporting": false 00:20:03.592 } 00:20:03.592 }, 00:20:03.592 { 00:20:03.592 "method": "nvmf_subsystem_add_host", 00:20:03.592 "params": { 00:20:03.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.592 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.592 "psk": "key0" 00:20:03.592 } 00:20:03.592 }, 00:20:03.592 { 00:20:03.592 "method": "nvmf_subsystem_add_ns", 00:20:03.592 "params": { 00:20:03.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.592 "namespace": { 00:20:03.592 "nsid": 1, 00:20:03.592 "bdev_name": 
"malloc0", 00:20:03.592 "nguid": "286D986CBBFA421AAF83CA6D568D9B2C", 00:20:03.592 "uuid": "286d986c-bbfa-421a-af83-ca6d568d9b2c", 00:20:03.592 "no_auto_visible": false 00:20:03.592 } 00:20:03.592 } 00:20:03.592 }, 00:20:03.592 { 00:20:03.592 "method": "nvmf_subsystem_add_listener", 00:20:03.592 "params": { 00:20:03.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.592 "listen_address": { 00:20:03.592 "trtype": "TCP", 00:20:03.592 "adrfam": "IPv4", 00:20:03.592 "traddr": "10.0.0.2", 00:20:03.592 "trsvcid": "4420" 00:20:03.592 }, 00:20:03.592 "secure_channel": true 00:20:03.592 } 00:20:03.592 } 00:20:03.592 ] 00:20:03.592 } 00:20:03.592 ] 00:20:03.592 }' 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3446878 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3446878 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3446878 ']' 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.592 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 [2024-12-06 11:19:09.570251] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:03.592 [2024-12-06 11:19:09.570309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.592 [2024-12-06 11:19:09.669024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.592 [2024-12-06 11:19:09.698040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.592 [2024-12-06 11:19:09.698068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.592 [2024-12-06 11:19:09.698074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.592 [2024-12-06 11:19:09.698079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.592 [2024-12-06 11:19:09.698083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
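The target process above is launched with its entire JSON configuration streamed over `-c /dev/fd/62`. A minimal sketch of that config's shape, using illustrative values copied from the dump printed earlier in this log (only two of the many methods shown; this is not the full config the test passes):

```python
import json

# Hypothetical, cut-down version of the "subsystems" config streamed to
# nvmf_tgt via -c /dev/fd/62 in the run above. Each subsystem carries an
# ordered list of {"method", "params"} RPC calls replayed at startup.
config = {
    "subsystems": [
        {
            "subsystem": "nvmf",
            "config": [
                {
                    "method": "nvmf_create_transport",
                    "params": {"trtype": "TCP", "max_queue_depth": 128},
                },
                {
                    "method": "nvmf_subsystem_add_listener",
                    "params": {
                        "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "listen_address": {
                            "trtype": "TCP",
                            "adrfam": "IPv4",
                            "traddr": "10.0.0.2",
                            "trsvcid": "4420",
                        },
                        # secure_channel: true is what requests the TLS
                        # ("experimental") listener seen in the log notices.
                        "secure_channel": True,
                    },
                },
            ],
        }
    ]
}

print(json.dumps(config, indent=1)[:60])
```

The ordering matters in the real config: the transport must exist before listeners are added, which is why `nvmf_create_transport` precedes `nvmf_subsystem_add_listener` in the dump above as well.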
00:20:03.592 [2024-12-06 11:19:09.698568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.854 [2024-12-06 11:19:09.892612] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.854 [2024-12-06 11:19:09.924625] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.854 [2024-12-06 11:19:09.924820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3447046 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3447046 /var/tmp/bdevperf.sock 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3447046 ']' 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:04.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.427 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:04.427 "subsystems": [ 00:20:04.427 { 00:20:04.427 "subsystem": "keyring", 00:20:04.427 "config": [ 00:20:04.427 { 00:20:04.427 "method": "keyring_file_add_key", 00:20:04.427 "params": { 00:20:04.427 "name": "key0", 00:20:04.427 "path": "/tmp/tmp.LespPu6TWf" 00:20:04.427 } 00:20:04.427 } 00:20:04.427 ] 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "subsystem": "iobuf", 00:20:04.427 "config": [ 00:20:04.427 { 00:20:04.427 "method": "iobuf_set_options", 00:20:04.427 "params": { 00:20:04.427 "small_pool_count": 8192, 00:20:04.427 "large_pool_count": 1024, 00:20:04.427 "small_bufsize": 8192, 00:20:04.427 "large_bufsize": 135168, 00:20:04.427 "enable_numa": false 00:20:04.427 } 00:20:04.427 } 00:20:04.427 ] 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "subsystem": "sock", 00:20:04.427 "config": [ 00:20:04.427 { 00:20:04.427 "method": "sock_set_default_impl", 00:20:04.427 "params": { 00:20:04.427 "impl_name": "posix" 00:20:04.427 } 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "method": "sock_impl_set_options", 00:20:04.427 "params": { 00:20:04.427 "impl_name": "ssl", 00:20:04.427 "recv_buf_size": 4096, 00:20:04.427 "send_buf_size": 4096, 00:20:04.427 "enable_recv_pipe": true, 00:20:04.427 "enable_quickack": false, 00:20:04.427 "enable_placement_id": 0, 00:20:04.427 "enable_zerocopy_send_server": true, 00:20:04.427 
"enable_zerocopy_send_client": false, 00:20:04.427 "zerocopy_threshold": 0, 00:20:04.427 "tls_version": 0, 00:20:04.427 "enable_ktls": false 00:20:04.427 } 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "method": "sock_impl_set_options", 00:20:04.427 "params": { 00:20:04.427 "impl_name": "posix", 00:20:04.427 "recv_buf_size": 2097152, 00:20:04.427 "send_buf_size": 2097152, 00:20:04.427 "enable_recv_pipe": true, 00:20:04.427 "enable_quickack": false, 00:20:04.427 "enable_placement_id": 0, 00:20:04.427 "enable_zerocopy_send_server": true, 00:20:04.427 "enable_zerocopy_send_client": false, 00:20:04.427 "zerocopy_threshold": 0, 00:20:04.427 "tls_version": 0, 00:20:04.427 "enable_ktls": false 00:20:04.427 } 00:20:04.427 } 00:20:04.427 ] 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "subsystem": "vmd", 00:20:04.427 "config": [] 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "subsystem": "accel", 00:20:04.427 "config": [ 00:20:04.427 { 00:20:04.427 "method": "accel_set_options", 00:20:04.427 "params": { 00:20:04.427 "small_cache_size": 128, 00:20:04.427 "large_cache_size": 16, 00:20:04.427 "task_count": 2048, 00:20:04.427 "sequence_count": 2048, 00:20:04.427 "buf_count": 2048 00:20:04.427 } 00:20:04.427 } 00:20:04.427 ] 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "subsystem": "bdev", 00:20:04.427 "config": [ 00:20:04.427 { 00:20:04.427 "method": "bdev_set_options", 00:20:04.427 "params": { 00:20:04.427 "bdev_io_pool_size": 65535, 00:20:04.427 "bdev_io_cache_size": 256, 00:20:04.427 "bdev_auto_examine": true, 00:20:04.427 "iobuf_small_cache_size": 128, 00:20:04.427 "iobuf_large_cache_size": 16 00:20:04.427 } 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "method": "bdev_raid_set_options", 00:20:04.427 "params": { 00:20:04.427 "process_window_size_kb": 1024, 00:20:04.427 "process_max_bandwidth_mb_sec": 0 00:20:04.427 } 00:20:04.427 }, 00:20:04.427 { 00:20:04.427 "method": "bdev_iscsi_set_options", 00:20:04.427 "params": { 00:20:04.427 "timeout_sec": 30 00:20:04.427 } 00:20:04.427 }, 
00:20:04.427 { 00:20:04.427 "method": "bdev_nvme_set_options", 00:20:04.427 "params": { 00:20:04.427 "action_on_timeout": "none", 00:20:04.427 "timeout_us": 0, 00:20:04.427 "timeout_admin_us": 0, 00:20:04.427 "keep_alive_timeout_ms": 10000, 00:20:04.427 "arbitration_burst": 0, 00:20:04.427 "low_priority_weight": 0, 00:20:04.427 "medium_priority_weight": 0, 00:20:04.427 "high_priority_weight": 0, 00:20:04.427 "nvme_adminq_poll_period_us": 10000, 00:20:04.427 "nvme_ioq_poll_period_us": 0, 00:20:04.427 "io_queue_requests": 512, 00:20:04.427 "delay_cmd_submit": true, 00:20:04.427 "transport_retry_count": 4, 00:20:04.427 "bdev_retry_count": 3, 00:20:04.427 "transport_ack_timeout": 0, 00:20:04.427 "ctrlr_loss_timeout_sec": 0, 00:20:04.427 "reconnect_delay_sec": 0, 00:20:04.427 "fast_io_fail_timeout_sec": 0, 00:20:04.427 "disable_auto_failback": false, 00:20:04.427 "generate_uuids": false, 00:20:04.427 "transport_tos": 0, 00:20:04.427 "nvme_error_stat": false, 00:20:04.427 "rdma_srq_size": 0, 00:20:04.427 "io_path_stat": false, 00:20:04.427 "allow_accel_sequence": false, 00:20:04.427 "rdma_max_cq_size": 0, 00:20:04.427 "rdma_cm_event_timeout_ms": 0, 00:20:04.427 "dhchap_digests": [ 00:20:04.427 "sha256", 00:20:04.427 "sha384", 00:20:04.427 "sha512" 00:20:04.427 ], 00:20:04.427 "dhchap_dhgroups": [ 00:20:04.427 "null", 00:20:04.427 "ffdhe2048", 00:20:04.427 "ffdhe3072", 00:20:04.427 "ffdhe4096", 00:20:04.428 "ffdhe6144", 00:20:04.428 "ffdhe8192" 00:20:04.428 ] 00:20:04.428 } 00:20:04.428 }, 00:20:04.428 { 00:20:04.428 "method": "bdev_nvme_attach_controller", 00:20:04.428 "params": { 00:20:04.428 "name": "TLSTEST", 00:20:04.428 "trtype": "TCP", 00:20:04.428 "adrfam": "IPv4", 00:20:04.428 "traddr": "10.0.0.2", 00:20:04.428 "trsvcid": "4420", 00:20:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.428 "prchk_reftag": false, 00:20:04.428 "prchk_guard": false, 00:20:04.428 "ctrlr_loss_timeout_sec": 0, 00:20:04.428 "reconnect_delay_sec": 0, 00:20:04.428 
"fast_io_fail_timeout_sec": 0, 00:20:04.428 "psk": "key0", 00:20:04.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.428 "hdgst": false, 00:20:04.428 "ddgst": false, 00:20:04.428 "multipath": "multipath" 00:20:04.428 } 00:20:04.428 }, 00:20:04.428 { 00:20:04.428 "method": "bdev_nvme_set_hotplug", 00:20:04.428 "params": { 00:20:04.428 "period_us": 100000, 00:20:04.428 "enable": false 00:20:04.428 } 00:20:04.428 }, 00:20:04.428 { 00:20:04.428 "method": "bdev_wait_for_examine" 00:20:04.428 } 00:20:04.428 ] 00:20:04.428 }, 00:20:04.428 { 00:20:04.428 "subsystem": "nbd", 00:20:04.428 "config": [] 00:20:04.428 } 00:20:04.428 ] 00:20:04.428 }' 00:20:04.428 [2024-12-06 11:19:10.459653] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:04.428 [2024-12-06 11:19:10.459708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447046 ] 00:20:04.428 [2024-12-06 11:19:10.523992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.428 [2024-12-06 11:19:10.553277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.690 [2024-12-06 11:19:10.688557] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.262 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.262 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:05.262 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.262 Running I/O for 10 seconds... 
00:20:07.587 5005.00 IOPS, 19.55 MiB/s [2024-12-06T10:19:14.694Z] 5571.50 IOPS, 21.76 MiB/s [2024-12-06T10:19:15.645Z] 5585.67 IOPS, 21.82 MiB/s [2024-12-06T10:19:16.593Z] 5577.00 IOPS, 21.79 MiB/s [2024-12-06T10:19:17.533Z] 5550.60 IOPS, 21.68 MiB/s [2024-12-06T10:19:18.475Z] 5642.50 IOPS, 22.04 MiB/s [2024-12-06T10:19:19.417Z] 5617.71 IOPS, 21.94 MiB/s [2024-12-06T10:19:20.360Z] 5569.62 IOPS, 21.76 MiB/s [2024-12-06T10:19:21.743Z] 5474.00 IOPS, 21.38 MiB/s [2024-12-06T10:19:21.743Z] 5442.60 IOPS, 21.26 MiB/s 00:20:15.576 Latency(us) 00:20:15.576 [2024-12-06T10:19:21.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.576 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.576 Verification LBA range: start 0x0 length 0x2000 00:20:15.576 TLSTESTn1 : 10.01 5447.17 21.28 0.00 0.00 23466.71 6034.77 79517.01 00:20:15.576 [2024-12-06T10:19:21.743Z] =================================================================================================================== 00:20:15.576 [2024-12-06T10:19:21.743Z] Total : 5447.17 21.28 0.00 0.00 23466.71 6034.77 79517.01 00:20:15.576 { 00:20:15.576 "results": [ 00:20:15.576 { 00:20:15.576 "job": "TLSTESTn1", 00:20:15.576 "core_mask": "0x4", 00:20:15.576 "workload": "verify", 00:20:15.576 "status": "finished", 00:20:15.576 "verify_range": { 00:20:15.576 "start": 0, 00:20:15.576 "length": 8192 00:20:15.576 }, 00:20:15.576 "queue_depth": 128, 00:20:15.576 "io_size": 4096, 00:20:15.576 "runtime": 10.014922, 00:20:15.576 "iops": 5447.171730343981, 00:20:15.576 "mibps": 21.278014571656175, 00:20:15.576 "io_failed": 0, 00:20:15.576 "io_timeout": 0, 00:20:15.576 "avg_latency_us": 23466.705522091663, 00:20:15.576 "min_latency_us": 6034.7733333333335, 00:20:15.576 "max_latency_us": 79517.01333333334 00:20:15.576 } 00:20:15.576 ], 00:20:15.576 "core_count": 1 00:20:15.576 } 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3447046 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3447046 ']' 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3447046 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447046 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447046' 00:20:15.576 killing process with pid 3447046 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3447046 00:20:15.576 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.576 00:20:15.576 Latency(us) 00:20:15.576 [2024-12-06T10:19:21.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.576 [2024-12-06T10:19:21.743Z] =================================================================================================================== 00:20:15.576 [2024-12-06T10:19:21.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3447046 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3446878 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3446878 ']' 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3446878 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446878 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446878' 00:20:15.576 killing process with pid 3446878 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3446878 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3446878 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.576 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3449316 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3449316 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:15.838 
11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3449316 ']' 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.838 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.838 [2024-12-06 11:19:21.802582] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:15.838 [2024-12-06 11:19:21.802634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.838 [2024-12-06 11:19:21.887254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.838 [2024-12-06 11:19:21.921078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.838 [2024-12-06 11:19:21.921112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.838 [2024-12-06 11:19:21.921121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.838 [2024-12-06 11:19:21.921128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:15.838 [2024-12-06 11:19:21.921134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.838 [2024-12-06 11:19:21.921733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.LespPu6TWf 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LespPu6TWf 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:16.780 [2024-12-06 11:19:22.799366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.780 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.041 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.041 [2024-12-06 11:19:23.172311] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:17.041 [2024-12-06 11:19:23.172549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.041 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:17.302 malloc0 00:20:17.302 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:17.564 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3449759 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3449759 /var/tmp/bdevperf.sock 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3449759 ']' 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.825 
11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.825 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.825 [2024-12-06 11:19:23.984795] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:17.825 [2024-12-06 11:19:23.984849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449759 ] 00:20:18.086 [2024-12-06 11:19:24.075595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.086 [2024-12-06 11:19:24.105134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.656 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.656 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:18.656 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:20:18.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.916 [2024-12-06 11:19:25.073803] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:19.177 nvme0n1 00:20:19.177 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.177 Running I/O for 1 seconds... 00:20:20.116 4963.00 IOPS, 19.39 MiB/s 00:20:20.116 Latency(us) 00:20:20.116 [2024-12-06T10:19:26.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.116 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.116 Verification LBA range: start 0x0 length 0x2000 00:20:20.116 nvme0n1 : 1.02 4969.31 19.41 0.00 0.00 25519.50 7427.41 25668.27 00:20:20.117 [2024-12-06T10:19:26.284Z] =================================================================================================================== 00:20:20.117 [2024-12-06T10:19:26.284Z] Total : 4969.31 19.41 0.00 0.00 25519.50 7427.41 25668.27 00:20:20.117 { 00:20:20.117 "results": [ 00:20:20.117 { 00:20:20.117 "job": "nvme0n1", 00:20:20.117 "core_mask": "0x2", 00:20:20.117 "workload": "verify", 00:20:20.117 "status": "finished", 00:20:20.117 "verify_range": { 00:20:20.117 "start": 0, 00:20:20.117 "length": 8192 00:20:20.117 }, 00:20:20.117 "queue_depth": 128, 00:20:20.117 "io_size": 4096, 00:20:20.117 "runtime": 1.02469, 00:20:20.117 "iops": 4969.307790648879, 00:20:20.117 "mibps": 19.411358557222183, 00:20:20.117 "io_failed": 0, 00:20:20.117 "io_timeout": 0, 00:20:20.117 "avg_latency_us": 25519.495742340925, 00:20:20.117 "min_latency_us": 7427.413333333333, 00:20:20.117 "max_latency_us": 25668.266666666666 00:20:20.117 } 00:20:20.117 ], 00:20:20.117 "core_count": 1 00:20:20.117 } 00:20:20.117 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3449759 00:20:20.117 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3449759 ']' 00:20:20.117 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3449759 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3449759 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3449759' 00:20:20.377 killing process with pid 3449759 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3449759 00:20:20.377 Received shutdown signal, test time was about 1.000000 seconds 00:20:20.377 00:20:20.377 Latency(us) 00:20:20.377 [2024-12-06T10:19:26.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.377 [2024-12-06T10:19:26.544Z] =================================================================================================================== 00:20:20.377 [2024-12-06T10:19:26.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3449759 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3449316 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3449316 ']' 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3449316 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3449316 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3449316' 00:20:20.377 killing process with pid 3449316 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3449316 00:20:20.377 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3449316 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3450241 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3450241 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3450241 ']' 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.637 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.637 [2024-12-06 11:19:26.717243] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:20.637 [2024-12-06 11:19:26.717300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.896 [2024-12-06 11:19:26.805546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.896 [2024-12-06 11:19:26.841459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.896 [2024-12-06 11:19:26.841496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.896 [2024-12-06 11:19:26.841504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.896 [2024-12-06 11:19:26.841511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.896 [2024-12-06 11:19:26.841516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.896 [2024-12-06 11:19:26.842085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.465 [2024-12-06 11:19:27.563608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.465 malloc0 00:20:21.465 [2024-12-06 11:19:27.590313] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.465 [2024-12-06 11:19:27.590537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3450461 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3450461 /var/tmp/bdevperf.sock 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3450461 ']' 00:20:21.465 11:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.465 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.466 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.466 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.466 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:21.724 [2024-12-06 11:19:27.670388] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:20:21.724 [2024-12-06 11:19:27.670435] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450461 ] 00:20:21.724 [2024-12-06 11:19:27.759826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.724 [2024-12-06 11:19:27.789615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.294 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.294 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:22.294 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LespPu6TWf 00:20:22.554 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.815 [2024-12-06 11:19:28.770317] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.815 nvme0n1 00:20:22.815 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.815 Running I/O for 1 seconds... 
00:20:24.202 5265.00 IOPS, 20.57 MiB/s 00:20:24.202 Latency(us) 00:20:24.202 [2024-12-06T10:19:30.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.202 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:24.202 Verification LBA range: start 0x0 length 0x2000 00:20:24.202 nvme0n1 : 1.02 5287.12 20.65 0.00 0.00 24033.35 4532.91 44564.48 00:20:24.202 [2024-12-06T10:19:30.369Z] =================================================================================================================== 00:20:24.202 [2024-12-06T10:19:30.369Z] Total : 5287.12 20.65 0.00 0.00 24033.35 4532.91 44564.48 00:20:24.202 { 00:20:24.202 "results": [ 00:20:24.202 { 00:20:24.202 "job": "nvme0n1", 00:20:24.202 "core_mask": "0x2", 00:20:24.202 "workload": "verify", 00:20:24.202 "status": "finished", 00:20:24.202 "verify_range": { 00:20:24.202 "start": 0, 00:20:24.202 "length": 8192 00:20:24.202 }, 00:20:24.202 "queue_depth": 128, 00:20:24.202 "io_size": 4096, 00:20:24.202 "runtime": 1.020216, 00:20:24.202 "iops": 5287.115669622904, 00:20:24.202 "mibps": 20.652795584464467, 00:20:24.202 "io_failed": 0, 00:20:24.202 "io_timeout": 0, 00:20:24.202 "avg_latency_us": 24033.347076999133, 00:20:24.202 "min_latency_us": 4532.906666666667, 00:20:24.202 "max_latency_us": 44564.48 00:20:24.202 } 00:20:24.202 ], 00:20:24.202 "core_count": 1 00:20:24.202 } 00:20:24.202 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:24.202 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.202 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.202 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.202 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:24.202 "subsystems": [ 00:20:24.202 { 00:20:24.202 "subsystem": "keyring", 
00:20:24.202 "config": [ 00:20:24.202 { 00:20:24.202 "method": "keyring_file_add_key", 00:20:24.202 "params": { 00:20:24.202 "name": "key0", 00:20:24.202 "path": "/tmp/tmp.LespPu6TWf" 00:20:24.202 } 00:20:24.202 } 00:20:24.202 ] 00:20:24.202 }, 00:20:24.202 { 00:20:24.202 "subsystem": "iobuf", 00:20:24.202 "config": [ 00:20:24.202 { 00:20:24.202 "method": "iobuf_set_options", 00:20:24.202 "params": { 00:20:24.202 "small_pool_count": 8192, 00:20:24.202 "large_pool_count": 1024, 00:20:24.202 "small_bufsize": 8192, 00:20:24.202 "large_bufsize": 135168, 00:20:24.202 "enable_numa": false 00:20:24.202 } 00:20:24.202 } 00:20:24.202 ] 00:20:24.202 }, 00:20:24.202 { 00:20:24.202 "subsystem": "sock", 00:20:24.202 "config": [ 00:20:24.202 { 00:20:24.202 "method": "sock_set_default_impl", 00:20:24.202 "params": { 00:20:24.202 "impl_name": "posix" 00:20:24.202 } 00:20:24.202 }, 00:20:24.202 { 00:20:24.202 "method": "sock_impl_set_options", 00:20:24.203 "params": { 00:20:24.203 "impl_name": "ssl", 00:20:24.203 "recv_buf_size": 4096, 00:20:24.203 "send_buf_size": 4096, 00:20:24.203 "enable_recv_pipe": true, 00:20:24.203 "enable_quickack": false, 00:20:24.203 "enable_placement_id": 0, 00:20:24.203 "enable_zerocopy_send_server": true, 00:20:24.203 "enable_zerocopy_send_client": false, 00:20:24.203 "zerocopy_threshold": 0, 00:20:24.203 "tls_version": 0, 00:20:24.203 "enable_ktls": false 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "sock_impl_set_options", 00:20:24.203 "params": { 00:20:24.203 "impl_name": "posix", 00:20:24.203 "recv_buf_size": 2097152, 00:20:24.203 "send_buf_size": 2097152, 00:20:24.203 "enable_recv_pipe": true, 00:20:24.203 "enable_quickack": false, 00:20:24.203 "enable_placement_id": 0, 00:20:24.203 "enable_zerocopy_send_server": true, 00:20:24.203 "enable_zerocopy_send_client": false, 00:20:24.203 "zerocopy_threshold": 0, 00:20:24.203 "tls_version": 0, 00:20:24.203 "enable_ktls": false 00:20:24.203 } 00:20:24.203 } 00:20:24.203 ] 
00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "subsystem": "vmd", 00:20:24.203 "config": [] 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "subsystem": "accel", 00:20:24.203 "config": [ 00:20:24.203 { 00:20:24.203 "method": "accel_set_options", 00:20:24.203 "params": { 00:20:24.203 "small_cache_size": 128, 00:20:24.203 "large_cache_size": 16, 00:20:24.203 "task_count": 2048, 00:20:24.203 "sequence_count": 2048, 00:20:24.203 "buf_count": 2048 00:20:24.203 } 00:20:24.203 } 00:20:24.203 ] 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "subsystem": "bdev", 00:20:24.203 "config": [ 00:20:24.203 { 00:20:24.203 "method": "bdev_set_options", 00:20:24.203 "params": { 00:20:24.203 "bdev_io_pool_size": 65535, 00:20:24.203 "bdev_io_cache_size": 256, 00:20:24.203 "bdev_auto_examine": true, 00:20:24.203 "iobuf_small_cache_size": 128, 00:20:24.203 "iobuf_large_cache_size": 16 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "bdev_raid_set_options", 00:20:24.203 "params": { 00:20:24.203 "process_window_size_kb": 1024, 00:20:24.203 "process_max_bandwidth_mb_sec": 0 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "bdev_iscsi_set_options", 00:20:24.203 "params": { 00:20:24.203 "timeout_sec": 30 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "bdev_nvme_set_options", 00:20:24.203 "params": { 00:20:24.203 "action_on_timeout": "none", 00:20:24.203 "timeout_us": 0, 00:20:24.203 "timeout_admin_us": 0, 00:20:24.203 "keep_alive_timeout_ms": 10000, 00:20:24.203 "arbitration_burst": 0, 00:20:24.203 "low_priority_weight": 0, 00:20:24.203 "medium_priority_weight": 0, 00:20:24.203 "high_priority_weight": 0, 00:20:24.203 "nvme_adminq_poll_period_us": 10000, 00:20:24.203 "nvme_ioq_poll_period_us": 0, 00:20:24.203 "io_queue_requests": 0, 00:20:24.203 "delay_cmd_submit": true, 00:20:24.203 "transport_retry_count": 4, 00:20:24.203 "bdev_retry_count": 3, 00:20:24.203 "transport_ack_timeout": 0, 00:20:24.203 "ctrlr_loss_timeout_sec": 0, 00:20:24.203 
"reconnect_delay_sec": 0, 00:20:24.203 "fast_io_fail_timeout_sec": 0, 00:20:24.203 "disable_auto_failback": false, 00:20:24.203 "generate_uuids": false, 00:20:24.203 "transport_tos": 0, 00:20:24.203 "nvme_error_stat": false, 00:20:24.203 "rdma_srq_size": 0, 00:20:24.203 "io_path_stat": false, 00:20:24.203 "allow_accel_sequence": false, 00:20:24.203 "rdma_max_cq_size": 0, 00:20:24.203 "rdma_cm_event_timeout_ms": 0, 00:20:24.203 "dhchap_digests": [ 00:20:24.203 "sha256", 00:20:24.203 "sha384", 00:20:24.203 "sha512" 00:20:24.203 ], 00:20:24.203 "dhchap_dhgroups": [ 00:20:24.203 "null", 00:20:24.203 "ffdhe2048", 00:20:24.203 "ffdhe3072", 00:20:24.203 "ffdhe4096", 00:20:24.203 "ffdhe6144", 00:20:24.203 "ffdhe8192" 00:20:24.203 ] 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "bdev_nvme_set_hotplug", 00:20:24.203 "params": { 00:20:24.203 "period_us": 100000, 00:20:24.203 "enable": false 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "bdev_malloc_create", 00:20:24.203 "params": { 00:20:24.203 "name": "malloc0", 00:20:24.203 "num_blocks": 8192, 00:20:24.203 "block_size": 4096, 00:20:24.203 "physical_block_size": 4096, 00:20:24.203 "uuid": "0b794641-f4ed-41da-a49d-ac7de77e3403", 00:20:24.203 "optimal_io_boundary": 0, 00:20:24.203 "md_size": 0, 00:20:24.203 "dif_type": 0, 00:20:24.203 "dif_is_head_of_md": false, 00:20:24.203 "dif_pi_format": 0 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "bdev_wait_for_examine" 00:20:24.203 } 00:20:24.203 ] 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "subsystem": "nbd", 00:20:24.203 "config": [] 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "subsystem": "scheduler", 00:20:24.203 "config": [ 00:20:24.203 { 00:20:24.203 "method": "framework_set_scheduler", 00:20:24.203 "params": { 00:20:24.203 "name": "static" 00:20:24.203 } 00:20:24.203 } 00:20:24.203 ] 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "subsystem": "nvmf", 00:20:24.203 "config": [ 00:20:24.203 { 00:20:24.203 
"method": "nvmf_set_config", 00:20:24.203 "params": { 00:20:24.203 "discovery_filter": "match_any", 00:20:24.203 "admin_cmd_passthru": { 00:20:24.203 "identify_ctrlr": false 00:20:24.203 }, 00:20:24.203 "dhchap_digests": [ 00:20:24.203 "sha256", 00:20:24.203 "sha384", 00:20:24.203 "sha512" 00:20:24.203 ], 00:20:24.203 "dhchap_dhgroups": [ 00:20:24.203 "null", 00:20:24.203 "ffdhe2048", 00:20:24.203 "ffdhe3072", 00:20:24.203 "ffdhe4096", 00:20:24.203 "ffdhe6144", 00:20:24.203 "ffdhe8192" 00:20:24.203 ] 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "nvmf_set_max_subsystems", 00:20:24.203 "params": { 00:20:24.203 "max_subsystems": 1024 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "nvmf_set_crdt", 00:20:24.203 "params": { 00:20:24.203 "crdt1": 0, 00:20:24.203 "crdt2": 0, 00:20:24.203 "crdt3": 0 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "nvmf_create_transport", 00:20:24.203 "params": { 00:20:24.203 "trtype": "TCP", 00:20:24.203 "max_queue_depth": 128, 00:20:24.203 "max_io_qpairs_per_ctrlr": 127, 00:20:24.203 "in_capsule_data_size": 4096, 00:20:24.203 "max_io_size": 131072, 00:20:24.203 "io_unit_size": 131072, 00:20:24.203 "max_aq_depth": 128, 00:20:24.203 "num_shared_buffers": 511, 00:20:24.203 "buf_cache_size": 4294967295, 00:20:24.203 "dif_insert_or_strip": false, 00:20:24.203 "zcopy": false, 00:20:24.203 "c2h_success": false, 00:20:24.203 "sock_priority": 0, 00:20:24.203 "abort_timeout_sec": 1, 00:20:24.203 "ack_timeout": 0, 00:20:24.203 "data_wr_pool_size": 0 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "nvmf_create_subsystem", 00:20:24.203 "params": { 00:20:24.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.203 "allow_any_host": false, 00:20:24.203 "serial_number": "00000000000000000000", 00:20:24.203 "model_number": "SPDK bdev Controller", 00:20:24.203 "max_namespaces": 32, 00:20:24.203 "min_cntlid": 1, 00:20:24.203 "max_cntlid": 65519, 00:20:24.203 "ana_reporting": 
false 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "nvmf_subsystem_add_host", 00:20:24.203 "params": { 00:20:24.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.203 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.203 "psk": "key0" 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "nvmf_subsystem_add_ns", 00:20:24.203 "params": { 00:20:24.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.203 "namespace": { 00:20:24.203 "nsid": 1, 00:20:24.203 "bdev_name": "malloc0", 00:20:24.203 "nguid": "0B794641F4ED41DAA49DAC7DE77E3403", 00:20:24.203 "uuid": "0b794641-f4ed-41da-a49d-ac7de77e3403", 00:20:24.203 "no_auto_visible": false 00:20:24.203 } 00:20:24.203 } 00:20:24.203 }, 00:20:24.203 { 00:20:24.203 "method": "nvmf_subsystem_add_listener", 00:20:24.203 "params": { 00:20:24.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.203 "listen_address": { 00:20:24.203 "trtype": "TCP", 00:20:24.203 "adrfam": "IPv4", 00:20:24.203 "traddr": "10.0.0.2", 00:20:24.203 "trsvcid": "4420" 00:20:24.203 }, 00:20:24.203 "secure_channel": false, 00:20:24.203 "sock_impl": "ssl" 00:20:24.203 } 00:20:24.203 } 00:20:24.203 ] 00:20:24.203 } 00:20:24.203 ] 00:20:24.203 }' 00:20:24.203 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:24.203 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:24.203 "subsystems": [ 00:20:24.203 { 00:20:24.203 "subsystem": "keyring", 00:20:24.203 "config": [ 00:20:24.203 { 00:20:24.203 "method": "keyring_file_add_key", 00:20:24.203 "params": { 00:20:24.203 "name": "key0", 00:20:24.203 "path": "/tmp/tmp.LespPu6TWf" 00:20:24.204 } 00:20:24.204 } 00:20:24.204 ] 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "subsystem": "iobuf", 00:20:24.204 "config": [ 00:20:24.204 { 00:20:24.204 "method": "iobuf_set_options", 00:20:24.204 "params": { 00:20:24.204 "small_pool_count": 
8192, 00:20:24.204 "large_pool_count": 1024, 00:20:24.204 "small_bufsize": 8192, 00:20:24.204 "large_bufsize": 135168, 00:20:24.204 "enable_numa": false 00:20:24.204 } 00:20:24.204 } 00:20:24.204 ] 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "subsystem": "sock", 00:20:24.204 "config": [ 00:20:24.204 { 00:20:24.204 "method": "sock_set_default_impl", 00:20:24.204 "params": { 00:20:24.204 "impl_name": "posix" 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "sock_impl_set_options", 00:20:24.204 "params": { 00:20:24.204 "impl_name": "ssl", 00:20:24.204 "recv_buf_size": 4096, 00:20:24.204 "send_buf_size": 4096, 00:20:24.204 "enable_recv_pipe": true, 00:20:24.204 "enable_quickack": false, 00:20:24.204 "enable_placement_id": 0, 00:20:24.204 "enable_zerocopy_send_server": true, 00:20:24.204 "enable_zerocopy_send_client": false, 00:20:24.204 "zerocopy_threshold": 0, 00:20:24.204 "tls_version": 0, 00:20:24.204 "enable_ktls": false 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "sock_impl_set_options", 00:20:24.204 "params": { 00:20:24.204 "impl_name": "posix", 00:20:24.204 "recv_buf_size": 2097152, 00:20:24.204 "send_buf_size": 2097152, 00:20:24.204 "enable_recv_pipe": true, 00:20:24.204 "enable_quickack": false, 00:20:24.204 "enable_placement_id": 0, 00:20:24.204 "enable_zerocopy_send_server": true, 00:20:24.204 "enable_zerocopy_send_client": false, 00:20:24.204 "zerocopy_threshold": 0, 00:20:24.204 "tls_version": 0, 00:20:24.204 "enable_ktls": false 00:20:24.204 } 00:20:24.204 } 00:20:24.204 ] 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "subsystem": "vmd", 00:20:24.204 "config": [] 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "subsystem": "accel", 00:20:24.204 "config": [ 00:20:24.204 { 00:20:24.204 "method": "accel_set_options", 00:20:24.204 "params": { 00:20:24.204 "small_cache_size": 128, 00:20:24.204 "large_cache_size": 16, 00:20:24.204 "task_count": 2048, 00:20:24.204 "sequence_count": 2048, 00:20:24.204 "buf_count": 2048 
00:20:24.204 } 00:20:24.204 } 00:20:24.204 ] 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "subsystem": "bdev", 00:20:24.204 "config": [ 00:20:24.204 { 00:20:24.204 "method": "bdev_set_options", 00:20:24.204 "params": { 00:20:24.204 "bdev_io_pool_size": 65535, 00:20:24.204 "bdev_io_cache_size": 256, 00:20:24.204 "bdev_auto_examine": true, 00:20:24.204 "iobuf_small_cache_size": 128, 00:20:24.204 "iobuf_large_cache_size": 16 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "bdev_raid_set_options", 00:20:24.204 "params": { 00:20:24.204 "process_window_size_kb": 1024, 00:20:24.204 "process_max_bandwidth_mb_sec": 0 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "bdev_iscsi_set_options", 00:20:24.204 "params": { 00:20:24.204 "timeout_sec": 30 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "bdev_nvme_set_options", 00:20:24.204 "params": { 00:20:24.204 "action_on_timeout": "none", 00:20:24.204 "timeout_us": 0, 00:20:24.204 "timeout_admin_us": 0, 00:20:24.204 "keep_alive_timeout_ms": 10000, 00:20:24.204 "arbitration_burst": 0, 00:20:24.204 "low_priority_weight": 0, 00:20:24.204 "medium_priority_weight": 0, 00:20:24.204 "high_priority_weight": 0, 00:20:24.204 "nvme_adminq_poll_period_us": 10000, 00:20:24.204 "nvme_ioq_poll_period_us": 0, 00:20:24.204 "io_queue_requests": 512, 00:20:24.204 "delay_cmd_submit": true, 00:20:24.204 "transport_retry_count": 4, 00:20:24.204 "bdev_retry_count": 3, 00:20:24.204 "transport_ack_timeout": 0, 00:20:24.204 "ctrlr_loss_timeout_sec": 0, 00:20:24.204 "reconnect_delay_sec": 0, 00:20:24.204 "fast_io_fail_timeout_sec": 0, 00:20:24.204 "disable_auto_failback": false, 00:20:24.204 "generate_uuids": false, 00:20:24.204 "transport_tos": 0, 00:20:24.204 "nvme_error_stat": false, 00:20:24.204 "rdma_srq_size": 0, 00:20:24.204 "io_path_stat": false, 00:20:24.204 "allow_accel_sequence": false, 00:20:24.204 "rdma_max_cq_size": 0, 00:20:24.204 "rdma_cm_event_timeout_ms": 0, 00:20:24.204 
"dhchap_digests": [ 00:20:24.204 "sha256", 00:20:24.204 "sha384", 00:20:24.204 "sha512" 00:20:24.204 ], 00:20:24.204 "dhchap_dhgroups": [ 00:20:24.204 "null", 00:20:24.204 "ffdhe2048", 00:20:24.204 "ffdhe3072", 00:20:24.204 "ffdhe4096", 00:20:24.204 "ffdhe6144", 00:20:24.204 "ffdhe8192" 00:20:24.204 ] 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "bdev_nvme_attach_controller", 00:20:24.204 "params": { 00:20:24.204 "name": "nvme0", 00:20:24.204 "trtype": "TCP", 00:20:24.204 "adrfam": "IPv4", 00:20:24.204 "traddr": "10.0.0.2", 00:20:24.204 "trsvcid": "4420", 00:20:24.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.204 "prchk_reftag": false, 00:20:24.204 "prchk_guard": false, 00:20:24.204 "ctrlr_loss_timeout_sec": 0, 00:20:24.204 "reconnect_delay_sec": 0, 00:20:24.204 "fast_io_fail_timeout_sec": 0, 00:20:24.204 "psk": "key0", 00:20:24.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.204 "hdgst": false, 00:20:24.204 "ddgst": false, 00:20:24.204 "multipath": "multipath" 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "bdev_nvme_set_hotplug", 00:20:24.204 "params": { 00:20:24.204 "period_us": 100000, 00:20:24.204 "enable": false 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "bdev_enable_histogram", 00:20:24.204 "params": { 00:20:24.204 "name": "nvme0n1", 00:20:24.204 "enable": true 00:20:24.204 } 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "method": "bdev_wait_for_examine" 00:20:24.204 } 00:20:24.204 ] 00:20:24.204 }, 00:20:24.204 { 00:20:24.204 "subsystem": "nbd", 00:20:24.204 "config": [] 00:20:24.204 } 00:20:24.204 ] 00:20:24.204 }' 00:20:24.204 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3450461 00:20:24.204 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3450461 ']' 00:20:24.204 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3450461 00:20:24.204 11:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.204 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.204 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450461 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450461' 00:20:24.466 killing process with pid 3450461 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3450461 00:20:24.466 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.466 00:20:24.466 Latency(us) 00:20:24.466 [2024-12-06T10:19:30.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.466 [2024-12-06T10:19:30.633Z] =================================================================================================================== 00:20:24.466 [2024-12-06T10:19:30.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3450461 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3450241 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3450241 ']' 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3450241 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.466 
11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450241 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450241' 00:20:24.466 killing process with pid 3450241 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3450241 00:20:24.466 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3450241 00:20:24.783 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:24.783 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.783 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.783 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:24.783 "subsystems": [ 00:20:24.783 { 00:20:24.783 "subsystem": "keyring", 00:20:24.783 "config": [ 00:20:24.783 { 00:20:24.783 "method": "keyring_file_add_key", 00:20:24.783 "params": { 00:20:24.783 "name": "key0", 00:20:24.783 "path": "/tmp/tmp.LespPu6TWf" 00:20:24.783 } 00:20:24.783 } 00:20:24.783 ] 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "subsystem": "iobuf", 00:20:24.783 "config": [ 00:20:24.783 { 00:20:24.783 "method": "iobuf_set_options", 00:20:24.783 "params": { 00:20:24.783 "small_pool_count": 8192, 00:20:24.783 "large_pool_count": 1024, 00:20:24.783 "small_bufsize": 8192, 00:20:24.783 "large_bufsize": 135168, 00:20:24.783 "enable_numa": false 00:20:24.783 } 00:20:24.783 } 00:20:24.783 ] 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "subsystem": "sock", 00:20:24.783 "config": [ 
00:20:24.783 { 00:20:24.783 "method": "sock_set_default_impl", 00:20:24.783 "params": { 00:20:24.783 "impl_name": "posix" 00:20:24.783 } 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "method": "sock_impl_set_options", 00:20:24.783 "params": { 00:20:24.783 "impl_name": "ssl", 00:20:24.783 "recv_buf_size": 4096, 00:20:24.783 "send_buf_size": 4096, 00:20:24.783 "enable_recv_pipe": true, 00:20:24.783 "enable_quickack": false, 00:20:24.783 "enable_placement_id": 0, 00:20:24.783 "enable_zerocopy_send_server": true, 00:20:24.783 "enable_zerocopy_send_client": false, 00:20:24.783 "zerocopy_threshold": 0, 00:20:24.783 "tls_version": 0, 00:20:24.783 "enable_ktls": false 00:20:24.783 } 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "method": "sock_impl_set_options", 00:20:24.783 "params": { 00:20:24.783 "impl_name": "posix", 00:20:24.783 "recv_buf_size": 2097152, 00:20:24.783 "send_buf_size": 2097152, 00:20:24.783 "enable_recv_pipe": true, 00:20:24.783 "enable_quickack": false, 00:20:24.783 "enable_placement_id": 0, 00:20:24.783 "enable_zerocopy_send_server": true, 00:20:24.783 "enable_zerocopy_send_client": false, 00:20:24.783 "zerocopy_threshold": 0, 00:20:24.783 "tls_version": 0, 00:20:24.783 "enable_ktls": false 00:20:24.783 } 00:20:24.783 } 00:20:24.783 ] 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "subsystem": "vmd", 00:20:24.783 "config": [] 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "subsystem": "accel", 00:20:24.783 "config": [ 00:20:24.783 { 00:20:24.783 "method": "accel_set_options", 00:20:24.783 "params": { 00:20:24.783 "small_cache_size": 128, 00:20:24.783 "large_cache_size": 16, 00:20:24.783 "task_count": 2048, 00:20:24.783 "sequence_count": 2048, 00:20:24.783 "buf_count": 2048 00:20:24.783 } 00:20:24.783 } 00:20:24.783 ] 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "subsystem": "bdev", 00:20:24.783 "config": [ 00:20:24.783 { 00:20:24.783 "method": "bdev_set_options", 00:20:24.783 "params": { 00:20:24.783 "bdev_io_pool_size": 65535, 00:20:24.783 "bdev_io_cache_size": 
256, 00:20:24.783 "bdev_auto_examine": true, 00:20:24.783 "iobuf_small_cache_size": 128, 00:20:24.783 "iobuf_large_cache_size": 16 00:20:24.783 } 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "method": "bdev_raid_set_options", 00:20:24.783 "params": { 00:20:24.783 "process_window_size_kb": 1024, 00:20:24.783 "process_max_bandwidth_mb_sec": 0 00:20:24.783 } 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "method": "bdev_iscsi_set_options", 00:20:24.783 "params": { 00:20:24.783 "timeout_sec": 30 00:20:24.783 } 00:20:24.783 }, 00:20:24.783 { 00:20:24.783 "method": "bdev_nvme_set_options", 00:20:24.783 "params": { 00:20:24.783 "action_on_timeout": "none", 00:20:24.783 "timeout_us": 0, 00:20:24.783 "timeout_admin_us": 0, 00:20:24.783 "keep_alive_timeout_ms": 10000, 00:20:24.783 "arbitration_burst": 0, 00:20:24.783 "low_priority_weight": 0, 00:20:24.783 "medium_priority_weight": 0, 00:20:24.783 "high_priority_weight": 0, 00:20:24.783 "nvme_adminq_poll_period_us": 10000, 00:20:24.783 "nvme_ioq_poll_period_us": 0, 00:20:24.783 "io_queue_requests": 0, 00:20:24.783 "delay_cmd_submit": true, 00:20:24.783 "transport_retry_count": 4, 00:20:24.783 "bdev_retry_count": 3, 00:20:24.783 "transport_ack_timeout": 0, 00:20:24.783 "ctrlr_loss_timeout_sec": 0, 00:20:24.783 "reconnect_delay_sec": 0, 00:20:24.783 "fast_io_fail_timeout_sec": 0, 00:20:24.783 "disable_auto_failback": false, 00:20:24.783 "generate_uuids": false, 00:20:24.783 "transport_tos": 0, 00:20:24.783 "nvme_error_stat": false, 00:20:24.783 "rdma_srq_size": 0, 00:20:24.783 "io_path_stat": false, 00:20:24.783 "allow_accel_sequence": false, 00:20:24.783 "rdma_max_cq_size": 0, 00:20:24.783 "rdma_cm_event_timeout_ms": 0, 00:20:24.783 "dhchap_digests": [ 00:20:24.783 "sha256", 00:20:24.783 "sha384", 00:20:24.783 "sha512" 00:20:24.783 ], 00:20:24.783 "dhchap_dhgroups": [ 00:20:24.783 "null", 00:20:24.783 "ffdhe2048", 00:20:24.783 "ffdhe3072", 00:20:24.783 "ffdhe4096", 00:20:24.783 "ffdhe6144", 00:20:24.783 "ffdhe8192" 00:20:24.784 ] 
00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "bdev_nvme_set_hotplug", 00:20:24.784 "params": { 00:20:24.784 "period_us": 100000, 00:20:24.784 "enable": false 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "bdev_malloc_create", 00:20:24.784 "params": { 00:20:24.784 "name": "malloc0", 00:20:24.784 "num_blocks": 8192, 00:20:24.784 "block_size": 4096, 00:20:24.784 "physical_block_size": 4096, 00:20:24.784 "uuid": "0b794641-f4ed-41da-a49d-ac7de77e3403", 00:20:24.784 "optimal_io_boundary": 0, 00:20:24.784 "md_size": 0, 00:20:24.784 "dif_type": 0, 00:20:24.784 "dif_is_head_of_md": false, 00:20:24.784 "dif_pi_format": 0 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "bdev_wait_for_examine" 00:20:24.784 } 00:20:24.784 ] 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "subsystem": "nbd", 00:20:24.784 "config": [] 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "subsystem": "scheduler", 00:20:24.784 "config": [ 00:20:24.784 { 00:20:24.784 "method": "framework_set_scheduler", 00:20:24.784 "params": { 00:20:24.784 "name": "static" 00:20:24.784 } 00:20:24.784 } 00:20:24.784 ] 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "subsystem": "nvmf", 00:20:24.784 "config": [ 00:20:24.784 { 00:20:24.784 "method": "nvmf_set_config", 00:20:24.784 "params": { 00:20:24.784 "discovery_filter": "match_any", 00:20:24.784 "admin_cmd_passthru": { 00:20:24.784 "identify_ctrlr": false 00:20:24.784 }, 00:20:24.784 "dhchap_digests": [ 00:20:24.784 "sha256", 00:20:24.784 "sha384", 00:20:24.784 "sha512" 00:20:24.784 ], 00:20:24.784 "dhchap_dhgroups": [ 00:20:24.784 "null", 00:20:24.784 "ffdhe2048", 00:20:24.784 "ffdhe3072", 00:20:24.784 "ffdhe4096", 00:20:24.784 "ffdhe6144", 00:20:24.784 "ffdhe8192" 00:20:24.784 ] 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "nvmf_set_max_subsystems", 00:20:24.784 "params": { 00:20:24.784 "max_subsystems": 1024 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": 
"nvmf_set_crdt", 00:20:24.784 "params": { 00:20:24.784 "crdt1": 0, 00:20:24.784 "crdt2": 0, 00:20:24.784 "crdt3": 0 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "nvmf_create_transport", 00:20:24.784 "params": { 00:20:24.784 "trtype": "TCP", 00:20:24.784 "max_queue_depth": 128, 00:20:24.784 "max_io_qpairs_per_ctrlr": 127, 00:20:24.784 "in_capsule_data_size": 4096, 00:20:24.784 "max_io_size": 131072, 00:20:24.784 "io_unit_size": 131072, 00:20:24.784 "max_aq_depth": 128, 00:20:24.784 "num_shared_buffers": 511, 00:20:24.784 "buf_cache_size": 4294967295, 00:20:24.784 "dif_insert_or_strip": false, 00:20:24.784 "zcopy": false, 00:20:24.784 "c2h_success": false, 00:20:24.784 "sock_priority": 0, 00:20:24.784 "abort_timeout_sec": 1, 00:20:24.784 "ack_timeout": 0, 00:20:24.784 "data_wr_pool_size": 0 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "nvmf_create_subsystem", 00:20:24.784 "params": { 00:20:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.784 "allow_any_host": false, 00:20:24.784 "serial_number": "00000000000000000000", 00:20:24.784 "model_number": "SPDK bdev Controller", 00:20:24.784 "max_namespaces": 32, 00:20:24.784 "min_cntlid": 1, 00:20:24.784 "max_cntlid": 65519, 00:20:24.784 "ana_reporting": false 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "nvmf_subsystem_add_host", 00:20:24.784 "params": { 00:20:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.784 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.784 "psk": "key0" 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 00:20:24.784 "method": "nvmf_subsystem_add_ns", 00:20:24.784 "params": { 00:20:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.784 "namespace": { 00:20:24.784 "nsid": 1, 00:20:24.784 "bdev_name": "malloc0", 00:20:24.784 "nguid": "0B794641F4ED41DAA49DAC7DE77E3403", 00:20:24.784 "uuid": "0b794641-f4ed-41da-a49d-ac7de77e3403", 00:20:24.784 "no_auto_visible": false 00:20:24.784 } 00:20:24.784 } 00:20:24.784 }, 00:20:24.784 { 
00:20:24.784 "method": "nvmf_subsystem_add_listener", 00:20:24.784 "params": { 00:20:24.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.784 "listen_address": { 00:20:24.784 "trtype": "TCP", 00:20:24.784 "adrfam": "IPv4", 00:20:24.784 "traddr": "10.0.0.2", 00:20:24.784 "trsvcid": "4420" 00:20:24.784 }, 00:20:24.784 "secure_channel": false, 00:20:24.784 "sock_impl": "ssl" 00:20:24.784 } 00:20:24.784 } 00:20:24.784 ] 00:20:24.784 } 00:20:24.784 ] 00:20:24.784 }' 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3451147 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3451147 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451147 ']' 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.784 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.785 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.785 [2024-12-06 11:19:30.773620] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:20:24.785 [2024-12-06 11:19:30.773676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.785 [2024-12-06 11:19:30.857424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.785 [2024-12-06 11:19:30.891633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.785 [2024-12-06 11:19:30.891667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.785 [2024-12-06 11:19:30.891675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.785 [2024-12-06 11:19:30.891682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.785 [2024-12-06 11:19:30.891687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.785 [2024-12-06 11:19:30.892290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.072 [2024-12-06 11:19:31.092314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.072 [2024-12-06 11:19:31.124331] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.072 [2024-12-06 11:19:31.124563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3451199 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3451199 /var/tmp/bdevperf.sock 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451199 ']' 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:25.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.667 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:25.667 "subsystems": [ 00:20:25.667 { 00:20:25.667 "subsystem": "keyring", 00:20:25.667 "config": [ 00:20:25.667 { 00:20:25.667 "method": "keyring_file_add_key", 00:20:25.667 "params": { 00:20:25.667 "name": "key0", 00:20:25.667 "path": "/tmp/tmp.LespPu6TWf" 00:20:25.667 } 00:20:25.667 } 00:20:25.667 ] 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "subsystem": "iobuf", 00:20:25.667 "config": [ 00:20:25.667 { 00:20:25.667 "method": "iobuf_set_options", 00:20:25.667 "params": { 00:20:25.667 "small_pool_count": 8192, 00:20:25.667 "large_pool_count": 1024, 00:20:25.667 "small_bufsize": 8192, 00:20:25.667 "large_bufsize": 135168, 00:20:25.667 "enable_numa": false 00:20:25.667 } 00:20:25.667 } 00:20:25.667 ] 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "subsystem": "sock", 00:20:25.667 "config": [ 00:20:25.667 { 00:20:25.667 "method": "sock_set_default_impl", 00:20:25.667 "params": { 00:20:25.667 "impl_name": "posix" 00:20:25.667 } 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "method": "sock_impl_set_options", 00:20:25.667 "params": { 00:20:25.667 "impl_name": "ssl", 00:20:25.667 "recv_buf_size": 4096, 00:20:25.667 "send_buf_size": 4096, 00:20:25.667 "enable_recv_pipe": true, 00:20:25.667 "enable_quickack": false, 00:20:25.667 "enable_placement_id": 0, 00:20:25.667 "enable_zerocopy_send_server": true, 00:20:25.667 
"enable_zerocopy_send_client": false, 00:20:25.667 "zerocopy_threshold": 0, 00:20:25.667 "tls_version": 0, 00:20:25.667 "enable_ktls": false 00:20:25.667 } 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "method": "sock_impl_set_options", 00:20:25.667 "params": { 00:20:25.667 "impl_name": "posix", 00:20:25.667 "recv_buf_size": 2097152, 00:20:25.667 "send_buf_size": 2097152, 00:20:25.667 "enable_recv_pipe": true, 00:20:25.667 "enable_quickack": false, 00:20:25.667 "enable_placement_id": 0, 00:20:25.667 "enable_zerocopy_send_server": true, 00:20:25.667 "enable_zerocopy_send_client": false, 00:20:25.667 "zerocopy_threshold": 0, 00:20:25.667 "tls_version": 0, 00:20:25.667 "enable_ktls": false 00:20:25.667 } 00:20:25.667 } 00:20:25.667 ] 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "subsystem": "vmd", 00:20:25.667 "config": [] 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "subsystem": "accel", 00:20:25.667 "config": [ 00:20:25.667 { 00:20:25.667 "method": "accel_set_options", 00:20:25.667 "params": { 00:20:25.667 "small_cache_size": 128, 00:20:25.667 "large_cache_size": 16, 00:20:25.667 "task_count": 2048, 00:20:25.667 "sequence_count": 2048, 00:20:25.667 "buf_count": 2048 00:20:25.667 } 00:20:25.667 } 00:20:25.667 ] 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "subsystem": "bdev", 00:20:25.667 "config": [ 00:20:25.667 { 00:20:25.667 "method": "bdev_set_options", 00:20:25.667 "params": { 00:20:25.667 "bdev_io_pool_size": 65535, 00:20:25.667 "bdev_io_cache_size": 256, 00:20:25.667 "bdev_auto_examine": true, 00:20:25.667 "iobuf_small_cache_size": 128, 00:20:25.667 "iobuf_large_cache_size": 16 00:20:25.667 } 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "method": "bdev_raid_set_options", 00:20:25.667 "params": { 00:20:25.667 "process_window_size_kb": 1024, 00:20:25.667 "process_max_bandwidth_mb_sec": 0 00:20:25.667 } 00:20:25.667 }, 00:20:25.667 { 00:20:25.667 "method": "bdev_iscsi_set_options", 00:20:25.667 "params": { 00:20:25.667 "timeout_sec": 30 00:20:25.667 } 00:20:25.667 }, 
00:20:25.667 { 00:20:25.667 "method": "bdev_nvme_set_options", 00:20:25.667 "params": { 00:20:25.667 "action_on_timeout": "none", 00:20:25.667 "timeout_us": 0, 00:20:25.667 "timeout_admin_us": 0, 00:20:25.667 "keep_alive_timeout_ms": 10000, 00:20:25.667 "arbitration_burst": 0, 00:20:25.667 "low_priority_weight": 0, 00:20:25.667 "medium_priority_weight": 0, 00:20:25.667 "high_priority_weight": 0, 00:20:25.667 "nvme_adminq_poll_period_us": 10000, 00:20:25.667 "nvme_ioq_poll_period_us": 0, 00:20:25.667 "io_queue_requests": 512, 00:20:25.667 "delay_cmd_submit": true, 00:20:25.667 "transport_retry_count": 4, 00:20:25.667 "bdev_retry_count": 3, 00:20:25.667 "transport_ack_timeout": 0, 00:20:25.667 "ctrlr_loss_timeout_sec": 0, 00:20:25.667 "reconnect_delay_sec": 0, 00:20:25.667 "fast_io_fail_timeout_sec": 0, 00:20:25.667 "disable_auto_failback": false, 00:20:25.667 "generate_uuids": false, 00:20:25.667 "transport_tos": 0, 00:20:25.667 "nvme_error_stat": false, 00:20:25.668 "rdma_srq_size": 0, 00:20:25.668 "io_path_stat": false, 00:20:25.668 "allow_accel_sequence": false, 00:20:25.668 "rdma_max_cq_size": 0, 00:20:25.668 "rdma_cm_event_timeout_ms": 0, 00:20:25.668 "dhchap_digests": [ 00:20:25.668 "sha256", 00:20:25.668 "sha384", 00:20:25.668 "sha512" 00:20:25.668 ], 00:20:25.668 "dhchap_dhgroups": [ 00:20:25.668 "null", 00:20:25.668 "ffdhe2048", 00:20:25.668 "ffdhe3072", 00:20:25.668 "ffdhe4096", 00:20:25.668 "ffdhe6144", 00:20:25.668 "ffdhe8192" 00:20:25.668 ] 00:20:25.668 } 00:20:25.668 }, 00:20:25.668 { 00:20:25.668 "method": "bdev_nvme_attach_controller", 00:20:25.668 "params": { 00:20:25.668 "name": "nvme0", 00:20:25.668 "trtype": "TCP", 00:20:25.668 "adrfam": "IPv4", 00:20:25.668 "traddr": "10.0.0.2", 00:20:25.668 "trsvcid": "4420", 00:20:25.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.668 "prchk_reftag": false, 00:20:25.668 "prchk_guard": false, 00:20:25.668 "ctrlr_loss_timeout_sec": 0, 00:20:25.668 "reconnect_delay_sec": 0, 00:20:25.668 
"fast_io_fail_timeout_sec": 0, 00:20:25.668 "psk": "key0", 00:20:25.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.668 "hdgst": false, 00:20:25.668 "ddgst": false, 00:20:25.668 "multipath": "multipath" 00:20:25.668 } 00:20:25.668 }, 00:20:25.668 { 00:20:25.668 "method": "bdev_nvme_set_hotplug", 00:20:25.668 "params": { 00:20:25.668 "period_us": 100000, 00:20:25.668 "enable": false 00:20:25.668 } 00:20:25.668 }, 00:20:25.668 { 00:20:25.668 "method": "bdev_enable_histogram", 00:20:25.668 "params": { 00:20:25.668 "name": "nvme0n1", 00:20:25.668 "enable": true 00:20:25.668 } 00:20:25.668 }, 00:20:25.668 { 00:20:25.668 "method": "bdev_wait_for_examine" 00:20:25.668 } 00:20:25.668 ] 00:20:25.668 }, 00:20:25.668 { 00:20:25.668 "subsystem": "nbd", 00:20:25.668 "config": [] 00:20:25.668 } 00:20:25.668 ] 00:20:25.668 }' 00:20:25.668 [2024-12-06 11:19:31.655599] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:25.668 [2024-12-06 11:19:31.655668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451199 ] 00:20:25.668 [2024-12-06 11:19:31.752888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.668 [2024-12-06 11:19:31.783042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.929 [2024-12-06 11:19:31.919503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.500 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.500 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:26.500 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:20:26.500 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:26.500 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.500 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.761 Running I/O for 1 seconds... 00:20:27.703 5310.00 IOPS, 20.74 MiB/s 00:20:27.703 Latency(us) 00:20:27.703 [2024-12-06T10:19:33.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.703 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:27.703 Verification LBA range: start 0x0 length 0x2000 00:20:27.703 nvme0n1 : 1.01 5358.85 20.93 0.00 0.00 23725.50 4642.13 62477.65 00:20:27.703 [2024-12-06T10:19:33.870Z] =================================================================================================================== 00:20:27.703 [2024-12-06T10:19:33.870Z] Total : 5358.85 20.93 0.00 0.00 23725.50 4642.13 62477.65 00:20:27.703 { 00:20:27.703 "results": [ 00:20:27.703 { 00:20:27.703 "job": "nvme0n1", 00:20:27.703 "core_mask": "0x2", 00:20:27.703 "workload": "verify", 00:20:27.703 "status": "finished", 00:20:27.703 "verify_range": { 00:20:27.703 "start": 0, 00:20:27.703 "length": 8192 00:20:27.703 }, 00:20:27.703 "queue_depth": 128, 00:20:27.703 "io_size": 4096, 00:20:27.703 "runtime": 1.01477, 00:20:27.703 "iops": 5358.849788622052, 00:20:27.703 "mibps": 20.93300698680489, 00:20:27.703 "io_failed": 0, 00:20:27.703 "io_timeout": 0, 00:20:27.703 "avg_latency_us": 23725.50378815741, 00:20:27.703 "min_latency_us": 4642.133333333333, 00:20:27.703 "max_latency_us": 62477.653333333335 00:20:27.703 } 00:20:27.703 ], 00:20:27.703 "core_count": 1 00:20:27.703 } 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM 
EXIT 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:27.703 nvmf_trace.0 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3451199 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451199 ']' 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451199 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.703 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451199 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451199' 00:20:27.964 killing process with pid 3451199 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451199 00:20:27.964 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.964 00:20:27.964 Latency(us) 00:20:27.964 [2024-12-06T10:19:34.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.964 [2024-12-06T10:19:34.131Z] =================================================================================================================== 00:20:27.964 [2024-12-06T10:19:34.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451199 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:27.964 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.964 rmmod nvme_tcp 00:20:27.964 rmmod nvme_fabrics 00:20:27.964 rmmod nvme_keyring 00:20:27.964 11:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3451147 ']' 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3451147 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451147 ']' 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451147 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.964 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451147 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451147' 00:20:28.225 killing process with pid 3451147 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451147 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451147 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.225 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.DRwEjVlhru /tmp/tmp.gVuBHpWWEW /tmp/tmp.LespPu6TWf 00:20:30.769 00:20:30.769 real 1m24.294s 00:20:30.769 user 2m9.718s 00:20:30.769 sys 0m27.708s 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.769 ************************************ 00:20:30.769 END TEST nvmf_tls 00:20:30.769 ************************************ 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.769 ************************************ 00:20:30.769 START TEST nvmf_fips 00:20:30.769 ************************************ 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.769 * Looking for test storage... 00:20:30.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.769 11:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.769 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.770 11:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:30.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.770 --rc genhtml_branch_coverage=1 00:20:30.770 --rc genhtml_function_coverage=1 00:20:30.770 --rc genhtml_legend=1 00:20:30.770 --rc geninfo_all_blocks=1 00:20:30.770 --rc geninfo_unexecuted_blocks=1 00:20:30.770 00:20:30.770 ' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:30.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.770 --rc genhtml_branch_coverage=1 00:20:30.770 --rc genhtml_function_coverage=1 00:20:30.770 --rc genhtml_legend=1 00:20:30.770 --rc geninfo_all_blocks=1 00:20:30.770 --rc geninfo_unexecuted_blocks=1 00:20:30.770 00:20:30.770 ' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:30.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.770 --rc genhtml_branch_coverage=1 00:20:30.770 --rc genhtml_function_coverage=1 00:20:30.770 --rc genhtml_legend=1 00:20:30.770 --rc geninfo_all_blocks=1 00:20:30.770 --rc geninfo_unexecuted_blocks=1 00:20:30.770 00:20:30.770 ' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:30.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.770 --rc genhtml_branch_coverage=1 00:20:30.770 --rc genhtml_function_coverage=1 00:20:30.770 --rc genhtml_legend=1 00:20:30.770 --rc geninfo_all_blocks=1 00:20:30.770 --rc geninfo_unexecuted_blocks=1 00:20:30.770 00:20:30.770 ' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
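The `cmp_versions` trace above (scripts/common.sh@333-368) splits two dotted version strings on `.`/`-` and compares them field by field. A minimal standalone sketch of that approach (an assumption about the idea, not the exact SPDK helper) looks like:

```shell
# Sketch of a dotted-version "greater or equal" test, in the spirit of
# the cmp_versions calls traced above. Missing fields default to 0, so
# "2.0" compares equal to "2.0.0".
ver_ge() {
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    ((x > y)) && return 0   # strictly greater at this field
    ((x < y)) && return 1   # strictly smaller at this field
  done
  return 0                   # all fields equal
}

ver_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
ver_ge 1.15 2 || echo "1.15 < 2"
```

This mirrors the two comparisons visible in the log: `lt 1.15 2` (the lcov check) and `ge 3.1.1 3.0.0` (the OpenSSL check).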
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
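The exported PATH in the paths/export.sh trace shows the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes repeated many times, because each sourcing prepends unconditionally. A dedupe pass (a sketch, not part of the SPDK scripts) that keeps the first occurrence of each entry in order:

```shell
# Remove duplicate entries from a colon-separated PATH-like string,
# preserving the order of first occurrences.
dedupe_path() {
  local out= seen= p
  local IFS=:
  for p in $1; do
    case ":$seen:" in
      *":$p:"*) ;;                                # already kept, skip
      *) out=${out:+$out:}$p; seen=${seen:+$seen:}$p ;;
    esac
  done
  printf '%s\n' "$out"
}
```

Applied as `PATH=$(dedupe_path "$PATH")`, this would collapse the repeated toolchain prefixes seen above to a single copy each.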
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
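The trace above records a genuine shell error at nvmf/common.sh line 33: `'[' '' -eq 1 ']'` produces `[: : integer expression expected`, because an unset or empty variable reached a numeric `test`. A defensive pattern (a sketch; `SOME_FLAG` is a hypothetical stand-in for the flag involved) defaults the value before the comparison:

```shell
# Default an empty/unset variable to 0 before a numeric test, so
# '[' '' -eq 1 ']' can never occur.
SOME_FLAG=${SOME_FLAG:-0}
if [ "$SOME_FLAG" -eq 1 ]; then
  echo "flag enabled"
fi
```

Here the test harness tolerates the error because the branch is non-fatal, but the same pattern under `set -e` would abort the run.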
fips/fips.sh@90 -- # check_openssl_version 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:30.770 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:30.771 Error setting digest 00:20:30.771 401244EE737F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:30.771 401244EE737F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:30.771 11:19:36 
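The `NOT openssl md5` step traced above (fips.sh@128) is a negative check: with the Red Hat FIPS provider active, the non-approved MD5 digest must fail (the `Error setting digest ... unsupported` output), and the test treats that failure as success. A standalone sketch of the same probe, guarded so it degrades gracefully where openssl is absent:

```shell
# Probe whether a FIPS-enforcing OpenSSL configuration is in effect:
# an approved digest (SHA-256) should work while MD5 is rejected.
if ! command -v openssl >/dev/null 2>&1; then
  echo "openssl not installed; skipping probe"
elif echo data | openssl md5 >/dev/null 2>&1; then
  echo "MD5 succeeded: FIPS provider is NOT enforcing"
else
  echo "MD5 rejected: consistent with FIPS mode"
fi
```

On the CI host in this log, `OPENSSL_CONF=spdk_fips.conf` points at a generated FIPS-only config, which is why the MD5 attempt fails with the `inner_evp_generic_fetch:unsupported` error shown.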
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.771 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.915 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:38.916 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:38.916 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:38.916 Found net devices under 0000:31:00.0: cvl_0_0 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:38.916 Found net devices under 0000:31:00.1: cvl_0_1 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.916 11:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.916 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:39.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:20:39.179 00:20:39.179 --- 10.0.0.2 ping statistics --- 00:20:39.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.179 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:20:39.179 00:20:39.179 --- 10.0.0.1 ping statistics --- 00:20:39.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.179 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.179 11:19:45 
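The `nvmf_tcp_init` sequence above moves one port of the NIC pair (`cvl_0_0`) into a namespace, addresses both sides on the same /24, opens the NVMe/TCP port in iptables, and pings in both directions. A condensed sketch of that wiring (not the SPDK helper itself; requires root, so it is shown as a function rather than executed):

```shell
# Wire a target interface into its own network namespace and verify
# reachability from the initiator side, mirroring the log's sequence.
setup_target_ns() {
  local ns=$1 target_if=$2 initiator_if=$3
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"
  ip addr add 10.0.0.1/24 dev "$initiator_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  # sanity check: the target address must answer from the initiator side
  ping -c 1 10.0.0.2
}
```

The reverse ping in the log (`ip netns exec ... ping -c 1 10.0.0.1`) confirms the path works from inside the namespace as well, before `nvmf_tgt` is launched there with `ip netns exec`.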
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3456565 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3456565 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3456565 ']' 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.179 11:19:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:39.439 [2024-12-06 11:19:45.389988] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:20:39.439 [2024-12-06 11:19:45.390062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.439 [2024-12-06 11:19:45.496577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.439 [2024-12-06 11:19:45.546713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.439 [2024-12-06 11:19:45.546767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.439 [2024-12-06 11:19:45.546776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.439 [2024-12-06 11:19:45.546784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.439 [2024-12-06 11:19:45.546791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.439 [2024-12-06 11:19:45.547613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.W1Z 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.W1Z 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.W1Z 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.W1Z 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:40.382 [2024-12-06 11:19:46.410618] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.382 [2024-12-06 11:19:46.426612] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.382 [2024-12-06 11:19:46.426969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.382 malloc0 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3456876 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3456876 /var/tmp/bdevperf.sock 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3456876 ']' 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.382 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:40.642 [2024-12-06 11:19:46.570652] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:20:40.642 [2024-12-06 11:19:46.570730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456876 ] 00:20:40.642 [2024-12-06 11:19:46.640026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.642 [2024-12-06 11:19:46.675856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.213 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.213 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:41.213 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.W1Z 00:20:41.474 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:41.735 [2024-12-06 11:19:47.659954] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.735 TLSTESTn1 00:20:41.735 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.735 Running I/O for 10 seconds... 
00:20:43.691 6063.00 IOPS, 23.68 MiB/s [2024-12-06T10:19:51.241Z] 6374.50 IOPS, 24.90 MiB/s [2024-12-06T10:19:52.181Z] 6363.33 IOPS, 24.86 MiB/s [2024-12-06T10:19:53.124Z] 6326.50 IOPS, 24.71 MiB/s [2024-12-06T10:19:54.065Z] 6368.20 IOPS, 24.88 MiB/s [2024-12-06T10:19:55.008Z] 6394.67 IOPS, 24.98 MiB/s [2024-12-06T10:19:55.948Z] 6294.57 IOPS, 24.59 MiB/s [2024-12-06T10:19:56.891Z] 6339.50 IOPS, 24.76 MiB/s [2024-12-06T10:19:58.276Z] 6368.33 IOPS, 24.88 MiB/s [2024-12-06T10:19:58.276Z] 6406.60 IOPS, 25.03 MiB/s 00:20:52.109 Latency(us) 00:20:52.109 [2024-12-06T10:19:58.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.109 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:52.109 Verification LBA range: start 0x0 length 0x2000 00:20:52.109 TLSTESTn1 : 10.01 6412.78 25.05 0.00 0.00 19929.87 4232.53 33423.36 00:20:52.109 [2024-12-06T10:19:58.276Z] =================================================================================================================== 00:20:52.110 [2024-12-06T10:19:58.277Z] Total : 6412.78 25.05 0.00 0.00 19929.87 4232.53 33423.36 00:20:52.110 { 00:20:52.110 "results": [ 00:20:52.110 { 00:20:52.110 "job": "TLSTESTn1", 00:20:52.110 "core_mask": "0x4", 00:20:52.110 "workload": "verify", 00:20:52.110 "status": "finished", 00:20:52.110 "verify_range": { 00:20:52.110 "start": 0, 00:20:52.110 "length": 8192 00:20:52.110 }, 00:20:52.110 "queue_depth": 128, 00:20:52.110 "io_size": 4096, 00:20:52.110 "runtime": 10.010164, 00:20:52.110 "iops": 6412.782048326081, 00:20:52.110 "mibps": 25.049929876273755, 00:20:52.110 "io_failed": 0, 00:20:52.110 "io_timeout": 0, 00:20:52.110 "avg_latency_us": 19929.8733756017, 00:20:52.110 "min_latency_us": 4232.533333333334, 00:20:52.110 "max_latency_us": 33423.36 00:20:52.110 } 00:20:52.110 ], 00:20:52.110 "core_count": 1 00:20:52.110 } 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:52.110 11:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:52.110 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:52.110 nvmf_trace.0 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3456876 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3456876 ']' 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3456876 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3456876 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3456876' 00:20:52.110 killing process with pid 3456876 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3456876 00:20:52.110 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.110 00:20:52.110 Latency(us) 00:20:52.110 [2024-12-06T10:19:58.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.110 [2024-12-06T10:19:58.277Z] =================================================================================================================== 00:20:52.110 [2024-12-06T10:19:58.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3456876 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:52.110 rmmod nvme_tcp 00:20:52.110 rmmod nvme_fabrics 00:20:52.110 rmmod nvme_keyring 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:52.110 11:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3456565 ']' 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3456565 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3456565 ']' 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3456565 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.110 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3456565 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3456565' 00:20:52.372 killing process with pid 3456565 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3456565 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3456565 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.372 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.W1Z 00:20:54.920 00:20:54.920 real 0m24.105s 00:20:54.920 user 0m23.637s 00:20:54.920 sys 0m10.537s 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:54.920 ************************************ 00:20:54.920 END TEST nvmf_fips 00:20:54.920 ************************************ 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:54.920 ************************************ 00:20:54.920 START TEST nvmf_control_msg_list 00:20:54.920 ************************************ 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:54.920 * Looking for test storage... 00:20:54.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.920 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:54.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.920 --rc genhtml_branch_coverage=1 00:20:54.920 --rc genhtml_function_coverage=1 00:20:54.920 --rc genhtml_legend=1 00:20:54.920 --rc geninfo_all_blocks=1 00:20:54.920 --rc geninfo_unexecuted_blocks=1 00:20:54.920 00:20:54.920 ' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:54.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.921 --rc genhtml_branch_coverage=1 00:20:54.921 --rc genhtml_function_coverage=1 00:20:54.921 --rc genhtml_legend=1 00:20:54.921 --rc geninfo_all_blocks=1 00:20:54.921 --rc geninfo_unexecuted_blocks=1 00:20:54.921 00:20:54.921 ' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:54.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.921 --rc genhtml_branch_coverage=1 00:20:54.921 --rc genhtml_function_coverage=1 00:20:54.921 --rc genhtml_legend=1 00:20:54.921 --rc geninfo_all_blocks=1 00:20:54.921 --rc geninfo_unexecuted_blocks=1 00:20:54.921 00:20:54.921 ' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:54.921 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.921 --rc genhtml_branch_coverage=1 00:20:54.921 --rc genhtml_function_coverage=1 00:20:54.921 --rc genhtml_legend=1 00:20:54.921 --rc geninfo_all_blocks=1 00:20:54.921 --rc geninfo_unexecuted_blocks=1 00:20:54.921 00:20:54.921 ' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:54.921 11:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.921 11:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:54.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.921 11:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.921 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.065 11:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:03.065 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:03.065 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.065 11:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:03.065 Found net devices under 0000:31:00.0: cvl_0_0 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.065 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.066 11:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:03.066 Found net devices under 0000:31:00.1: cvl_0_1 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.066 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.066 11:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:21:03.066 00:21:03.066 --- 10.0.0.2 ping statistics --- 00:21:03.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.066 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:21:03.066 00:21:03.066 --- 10.0.0.1 ping statistics --- 00:21:03.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.066 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.066 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3463829 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3463829 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3463829 ']' 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:03.328 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:03.328 [2024-12-06 11:20:09.321741] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:21:03.328 [2024-12-06 11:20:09.321808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.328 [2024-12-06 11:20:09.412586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.328 [2024-12-06 11:20:09.452510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.328 [2024-12-06 11:20:09.452545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.328 [2024-12-06 11:20:09.452558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.328 [2024-12-06 11:20:09.452565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.328 [2024-12-06 11:20:09.452570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.328 [2024-12-06 11:20:09.453167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:04.272 [2024-12-06 11:20:10.155537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:04.272 Malloc0 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:04.272 [2024-12-06 11:20:10.190375] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3463983 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3463984 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3463985 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3463983 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:04.272 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:04.272 [2024-12-06 11:20:10.258803] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:04.272 [2024-12-06 11:20:10.268714] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:04.272 [2024-12-06 11:20:10.288739] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:05.656 Initializing NVMe Controllers 00:21:05.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:05.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:05.656 Initialization complete. Launching workers. 00:21:05.656 ======================================================== 00:21:05.656 Latency(us) 00:21:05.656 Device Information : IOPS MiB/s Average min max 00:21:05.656 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40895.26 40754.59 40943.88 00:21:05.656 ======================================================== 00:21:05.656 Total : 25.00 0.10 40895.26 40754.59 40943.88 00:21:05.657 00:21:05.657 Initializing NVMe Controllers 00:21:05.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:05.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:05.657 Initialization complete. Launching workers. 
00:21:05.657 ======================================================== 00:21:05.657 Latency(us) 00:21:05.657 Device Information : IOPS MiB/s Average min max 00:21:05.657 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40893.45 40649.22 41025.26 00:21:05.657 ======================================================== 00:21:05.657 Total : 25.00 0.10 40893.45 40649.22 41025.26 00:21:05.657 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3463984 00:21:05.657 Initializing NVMe Controllers 00:21:05.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:05.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:05.657 Initialization complete. Launching workers. 00:21:05.657 ======================================================== 00:21:05.657 Latency(us) 00:21:05.657 Device Information : IOPS MiB/s Average min max 00:21:05.657 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40892.77 40663.79 40993.88 00:21:05.657 ======================================================== 00:21:05.657 Total : 25.00 0.10 40892.77 40663.79 40993.88 00:21:05.657 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3463985 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.657 11:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.657 rmmod nvme_tcp 00:21:05.657 rmmod nvme_fabrics 00:21:05.657 rmmod nvme_keyring 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3463829 ']' 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3463829 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3463829 ']' 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3463829 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3463829 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3463829' 00:21:05.657 killing process with pid 3463829 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3463829 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3463829 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.657 11:20:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.206 00:21:08.206 real 0m13.214s 00:21:08.206 user 0m8.219s 
00:21:08.206 sys 0m7.191s 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:08.206 ************************************ 00:21:08.206 END TEST nvmf_control_msg_list 00:21:08.206 ************************************ 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:08.206 ************************************ 00:21:08.206 START TEST nvmf_wait_for_buf 00:21:08.206 ************************************ 00:21:08.206 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:08.206 * Looking for test storage... 
00:21:08.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.206 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:08.206 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:21:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.207 --rc genhtml_branch_coverage=1 00:21:08.207 --rc genhtml_function_coverage=1 00:21:08.207 --rc genhtml_legend=1 00:21:08.207 --rc geninfo_all_blocks=1 00:21:08.207 --rc geninfo_unexecuted_blocks=1 00:21:08.207 00:21:08.207 ' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.207 --rc genhtml_branch_coverage=1 00:21:08.207 --rc genhtml_function_coverage=1 00:21:08.207 --rc genhtml_legend=1 00:21:08.207 --rc geninfo_all_blocks=1 00:21:08.207 --rc geninfo_unexecuted_blocks=1 00:21:08.207 00:21:08.207 ' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.207 --rc genhtml_branch_coverage=1 00:21:08.207 --rc genhtml_function_coverage=1 00:21:08.207 --rc genhtml_legend=1 00:21:08.207 --rc geninfo_all_blocks=1 00:21:08.207 --rc geninfo_unexecuted_blocks=1 00:21:08.207 00:21:08.207 ' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.207 --rc genhtml_branch_coverage=1 00:21:08.207 --rc genhtml_function_coverage=1 00:21:08.207 --rc genhtml_legend=1 00:21:08.207 --rc geninfo_all_blocks=1 00:21:08.207 --rc geninfo_unexecuted_blocks=1 00:21:08.207 00:21:08.207 ' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.207 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.208 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.353 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:16.354 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:16.354 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:16.354 Found net devices under 0000:31:00.0: cvl_0_0 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.354 11:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:16.354 Found net devices under 0000:31:00.1: cvl_0_1 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.354 11:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.354 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.615 11:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:21:16.615 00:21:16.615 --- 10.0.0.2 ping statistics --- 00:21:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.615 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:21:16.615 00:21:16.615 --- 10.0.0.1 ping statistics --- 00:21:16.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.615 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3469000 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 3469000 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3469000 ']' 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.615 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:16.615 [2024-12-06 11:20:22.772851] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:21:16.615 [2024-12-06 11:20:22.772921] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.877 [2024-12-06 11:20:22.866407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.877 [2024-12-06 11:20:22.905756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.877 [2024-12-06 11:20:22.905791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:16.877 [2024-12-06 11:20:22.905799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.877 [2024-12-06 11:20:22.905805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.877 [2024-12-06 11:20:22.905811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.877 [2024-12-06 11:20:22.906454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.449 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.710 
11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.710 Malloc0 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.710 [2024-12-06 11:20:23.702592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.710 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.711 [2024-12-06 11:20:23.738823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:17.711 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:17.711 [2024-12-06 11:20:23.848941] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:19.096 Initializing NVMe Controllers 00:21:19.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:19.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:19.096 Initialization complete. Launching workers. 00:21:19.096 ======================================================== 00:21:19.096 Latency(us) 00:21:19.096 Device Information : IOPS MiB/s Average min max 00:21:19.096 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32272.37 8000.30 62853.74 00:21:19.096 ======================================================== 00:21:19.096 Total : 129.00 16.12 32272.37 8000.30 62853.74 00:21:19.096 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.096 11:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.096 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.096 rmmod nvme_tcp 00:21:19.358 rmmod nvme_fabrics 00:21:19.358 rmmod nvme_keyring 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3469000 ']' 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3469000 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3469000 ']' 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3469000 
00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3469000 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3469000' 00:21:19.358 killing process with pid 3469000 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3469000 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3469000 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.358 11:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.358 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.901 00:21:21.901 real 0m13.684s 00:21:21.901 user 0m5.297s 00:21:21.901 sys 0m6.958s 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:21.901 ************************************ 00:21:21.901 END TEST nvmf_wait_for_buf 00:21:21.901 ************************************ 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.901 11:20:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.047 
11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:30.047 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.047 11:20:35 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:30.047 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.047 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:30.048 Found net devices under 0000:31:00.0: cvl_0_0 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:30.048 Found net devices under 0000:31:00.1: cvl_0_1 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:30.048 ************************************ 00:21:30.048 START TEST nvmf_perf_adq 00:21:30.048 ************************************ 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:30.048 * Looking for test storage... 00:21:30.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:30.048 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.048 --rc genhtml_branch_coverage=1 00:21:30.048 --rc genhtml_function_coverage=1 00:21:30.048 --rc genhtml_legend=1 00:21:30.048 --rc geninfo_all_blocks=1 00:21:30.048 --rc geninfo_unexecuted_blocks=1 00:21:30.048 00:21:30.048 ' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.048 --rc genhtml_branch_coverage=1 00:21:30.048 --rc genhtml_function_coverage=1 00:21:30.048 --rc genhtml_legend=1 00:21:30.048 --rc geninfo_all_blocks=1 00:21:30.048 --rc geninfo_unexecuted_blocks=1 00:21:30.048 00:21:30.048 ' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.048 --rc genhtml_branch_coverage=1 00:21:30.048 --rc genhtml_function_coverage=1 00:21:30.048 --rc genhtml_legend=1 00:21:30.048 --rc geninfo_all_blocks=1 00:21:30.048 --rc geninfo_unexecuted_blocks=1 00:21:30.048 00:21:30.048 ' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:30.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.048 --rc genhtml_branch_coverage=1 00:21:30.048 --rc genhtml_function_coverage=1 00:21:30.048 --rc genhtml_legend=1 00:21:30.048 --rc geninfo_all_blocks=1 00:21:30.048 --rc geninfo_unexecuted_blocks=1 00:21:30.048 00:21:30.048 ' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.048 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.049 11:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:30.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:30.049 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.319 11:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:38.319 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:38.319 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:38.319 Found net devices under 0000:31:00.0: cvl_0_0 00:21:38.319 11:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:38.319 Found net devices under 0000:31:00.1: cvl_0_1 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:38.319 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:39.703 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:42.246 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.534 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:47.535 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:47.535 11:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:47.535 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:47.535 Found net devices under 0000:31:00.0: cvl_0_0 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:47.535 Found net devices under 0000:31:00.1: cvl_0_1 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.535 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:47.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:21:47.535 00:21:47.535 --- 10.0.0.2 ping statistics --- 00:21:47.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.535 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:47.535 00:21:47.535 --- 10.0.0.1 ping statistics --- 00:21:47.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.535 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3480278 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3480278 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3480278 ']' 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.535 11:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.535 [2024-12-06 11:20:53.310681] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:21:47.535 [2024-12-06 11:20:53.310782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.535 [2024-12-06 11:20:53.403806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.535 [2024-12-06 11:20:53.447158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.535 [2024-12-06 11:20:53.447195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.535 [2024-12-06 11:20:53.447203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.535 [2024-12-06 11:20:53.447210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.535 [2024-12-06 11:20:53.447216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
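The target here was started with `-m 0xF`, and the reactor messages that follow confirm four cores came up (cores 0-3); the perf client later runs with `-c 0xF0` (cores 4-7). As a minimal sketch of how such an SPDK/DPDK-style hex core mask expands into core IDs (`parse_coremask` is a hypothetical helper for illustration, not an SPDK API):

```python
# Expand an SPDK/DPDK-style hex core mask (e.g. -m 0xF) into a list of core IDs.
# parse_coremask is a hypothetical helper, shown only to illustrate the mask format.
def parse_coremask(mask: str) -> list[int]:
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if (value >> bit) & 1]

print(parse_coremask("0xF"))   # target mask: cores 0-3
print(parse_coremask("0xF0"))  # perf client mask: cores 4-7
```

Keeping the two masks disjoint is what lets the target reactors and the perf workers run on separate cores of the same host.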
00:21:47.535 [2024-12-06 11:20:53.449081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.535 [2024-12-06 11:20:53.449098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.536 [2024-12-06 11:20:53.449233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.536 [2024-12-06 11:20:53.449233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:48.108 11:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.108 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.369 [2024-12-06 11:20:54.296168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.369 Malloc1 00:21:48.369 11:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.369 [2024-12-06 11:20:54.364355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3480630 00:21:48.369 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:48.369 11:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:50.281 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:50.281 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.281 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.281 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.281 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:50.281 "tick_rate": 2400000000, 00:21:50.281 "poll_groups": [ 00:21:50.281 { 00:21:50.281 "name": "nvmf_tgt_poll_group_000", 00:21:50.281 "admin_qpairs": 1, 00:21:50.281 "io_qpairs": 1, 00:21:50.281 "current_admin_qpairs": 1, 00:21:50.281 "current_io_qpairs": 1, 00:21:50.281 "pending_bdev_io": 0, 00:21:50.281 "completed_nvme_io": 20488, 00:21:50.281 "transports": [ 00:21:50.281 { 00:21:50.281 "trtype": "TCP" 00:21:50.281 } 00:21:50.281 ] 00:21:50.281 }, 00:21:50.281 { 00:21:50.281 "name": "nvmf_tgt_poll_group_001", 00:21:50.281 "admin_qpairs": 0, 00:21:50.281 "io_qpairs": 1, 00:21:50.281 "current_admin_qpairs": 0, 00:21:50.281 "current_io_qpairs": 1, 00:21:50.281 "pending_bdev_io": 0, 00:21:50.281 "completed_nvme_io": 28954, 00:21:50.281 "transports": [ 00:21:50.281 { 00:21:50.281 "trtype": "TCP" 00:21:50.281 } 00:21:50.281 ] 00:21:50.281 }, 00:21:50.281 { 00:21:50.281 "name": "nvmf_tgt_poll_group_002", 00:21:50.281 "admin_qpairs": 0, 00:21:50.281 "io_qpairs": 1, 00:21:50.281 "current_admin_qpairs": 0, 00:21:50.281 "current_io_qpairs": 1, 00:21:50.281 "pending_bdev_io": 0, 00:21:50.281 "completed_nvme_io": 21059, 00:21:50.281 
"transports": [ 00:21:50.281 { 00:21:50.281 "trtype": "TCP" 00:21:50.281 } 00:21:50.281 ] 00:21:50.281 }, 00:21:50.281 { 00:21:50.281 "name": "nvmf_tgt_poll_group_003", 00:21:50.281 "admin_qpairs": 0, 00:21:50.281 "io_qpairs": 1, 00:21:50.281 "current_admin_qpairs": 0, 00:21:50.281 "current_io_qpairs": 1, 00:21:50.281 "pending_bdev_io": 0, 00:21:50.281 "completed_nvme_io": 20515, 00:21:50.281 "transports": [ 00:21:50.281 { 00:21:50.281 "trtype": "TCP" 00:21:50.281 } 00:21:50.281 ] 00:21:50.281 } 00:21:50.281 ] 00:21:50.281 }' 00:21:50.281 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:50.281 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:50.542 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:50.542 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:50.542 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3480630 00:21:58.679 Initializing NVMe Controllers 00:21:58.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:58.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:58.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:58.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:58.679 Initialization complete. Launching workers. 
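The `perf_adq.sh@86`/`@87` check above pipes the `nvmf_get_stats` output through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'` and `wc -l`, then verifies the count equals 4 (every poll group carrying exactly one I/O queue pair). A minimal Python equivalent of that jq pipeline, using hypothetical stats data shaped like the dump above, might look like:

```python
import json

# Hypothetical nvmf_get_stats payload, trimmed to the fields the check uses;
# the real dump above also carries admin_qpairs, completed_nvme_io, etc.
stats_json = json.dumps({
    "poll_groups": [
        {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1},
        {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1},
        {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1},
        {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1},
    ]
})

stats = json.loads(stats_json)

# Count poll groups with exactly one active I/O queue pair, as the
# jq + wc -l pipeline does.
count = sum(1 for pg in stats["poll_groups"] if pg["current_io_qpairs"] == 1)

# The script's [[ $count -ne 4 ]] guard: all four groups must be active.
assert count == 4, "I/O qpairs not spread across all poll groups"
print(count)
```

This mirrors the pass condition of the first (non-busy-poll) run: ADQ has steered one connection onto each of the four target poll groups.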
00:21:58.679 ======================================================== 00:21:58.679 Latency(us) 00:21:58.679 Device Information : IOPS MiB/s Average min max 00:21:58.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13908.10 54.33 4602.77 1282.96 9675.90 00:21:58.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15270.00 59.65 4191.53 1502.96 8211.97 00:21:58.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14250.70 55.67 4491.42 1104.95 11106.77 00:21:58.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11365.10 44.39 5630.69 1574.56 11173.52 00:21:58.679 ======================================================== 00:21:58.679 Total : 54793.89 214.04 4672.41 1104.95 11173.52 00:21:58.679 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.679 rmmod nvme_tcp 00:21:58.679 rmmod nvme_fabrics 00:21:58.679 rmmod nvme_keyring 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:58.679 11:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3480278 ']' 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3480278 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3480278 ']' 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3480278 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3480278 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3480278' 00:21:58.679 killing process with pid 3480278 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3480278 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3480278 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:58.679 
11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.679 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.220 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.220 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:01.220 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:01.220 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:03.130 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:05.043 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.335 11:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:10.335 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:10.335 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:10.335 Found net devices under 0000:31:00.0: cvl_0_0 00:22:10.335 11:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:10.335 Found net devices under 0000:31:00.1: cvl_0_1 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.335 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.335 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.335 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.335 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:10.335 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.335 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.335 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:22:10.336 00:22:10.336 --- 10.0.0.2 ping statistics --- 00:22:10.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.336 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:22:10.336 00:22:10.336 --- 10.0.0.1 ping statistics --- 00:22:10.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.336 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:10.336 net.core.busy_poll = 1 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:10.336 net.core.busy_read = 1 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:10.336 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:10.596 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3485786 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3485786 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3485786 ']' 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.597 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.597 [2024-12-06 11:21:16.676592] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:10.597 [2024-12-06 11:21:16.676648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.857 [2024-12-06 11:21:16.768769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.857 [2024-12-06 11:21:16.807671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.857 [2024-12-06 11:21:16.807706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.857 [2024-12-06 11:21:16.807714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.857 [2024-12-06 11:21:16.807721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:10.857 [2024-12-06 11:21:16.807727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.857 [2024-12-06 11:21:16.809323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.857 [2024-12-06 11:21:16.809439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.857 [2024-12-06 11:21:16.809595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.857 [2024-12-06 11:21:16.809596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.428 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.689 [2024-12-06 11:21:17.664890] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.689 Malloc1
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:11.689 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.690 [2024-12-06 11:21:17.735064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3486007
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2
00:22:11.690 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:13.606 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats
00:22:13.606 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.606 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:13.606 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.606 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:22:13.606 "tick_rate": 2400000000,
00:22:13.606 "poll_groups": [
00:22:13.606 {
00:22:13.606 "name": "nvmf_tgt_poll_group_000",
00:22:13.606 "admin_qpairs": 1,
00:22:13.606 "io_qpairs": 2,
00:22:13.606 "current_admin_qpairs": 1,
00:22:13.606 "current_io_qpairs": 2,
00:22:13.606 "pending_bdev_io": 0,
00:22:13.606 "completed_nvme_io": 29011,
00:22:13.606 "transports": [
00:22:13.606 {
00:22:13.606 "trtype": "TCP"
00:22:13.606 }
00:22:13.606 ]
00:22:13.606 },
00:22:13.606 {
00:22:13.606 "name": "nvmf_tgt_poll_group_001",
00:22:13.606 "admin_qpairs": 0,
00:22:13.606 "io_qpairs": 2,
00:22:13.606 "current_admin_qpairs": 0,
00:22:13.606 "current_io_qpairs": 2,
00:22:13.606 "pending_bdev_io": 0,
00:22:13.606 "completed_nvme_io": 40177,
00:22:13.606 "transports": [
00:22:13.606 {
00:22:13.606 "trtype": "TCP"
00:22:13.606 }
00:22:13.606 ]
00:22:13.606 },
00:22:13.606 {
00:22:13.606 "name": "nvmf_tgt_poll_group_002",
00:22:13.606 "admin_qpairs": 0,
00:22:13.606 "io_qpairs": 0,
00:22:13.606 "current_admin_qpairs": 0,
00:22:13.606 "current_io_qpairs": 0,
00:22:13.606 "pending_bdev_io": 0,
00:22:13.606 "completed_nvme_io": 0,
00:22:13.606 "transports": [
00:22:13.606 {
00:22:13.606 "trtype": "TCP"
00:22:13.606 }
00:22:13.606 ]
00:22:13.606 },
00:22:13.606 {
00:22:13.606 "name": "nvmf_tgt_poll_group_003",
00:22:13.606 "admin_qpairs": 0,
00:22:13.606 "io_qpairs": 0,
00:22:13.606 "current_admin_qpairs": 0,
00:22:13.606 "current_io_qpairs": 0,
00:22:13.606 "pending_bdev_io": 0,
00:22:13.606 "completed_nvme_io": 0,
00:22:13.607 "transports": [
00:22:13.607 {
00:22:13.607 "trtype": "TCP"
00:22:13.607 }
00:22:13.607 ]
00:22:13.607 }
00:22:13.607 ]
00:22:13.607 }'
00:22:13.607 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:22:13.607 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:22:13.867 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:22:13.868 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:22:13.868 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3486007
00:22:21.999 Initializing NVMe Controllers
00:22:21.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:21.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:21.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:21.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:21.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:21.999 Initialization complete. Launching workers.
00:22:21.999 ========================================================
00:22:21.999 Latency(us)
00:22:21.999 Device Information : IOPS MiB/s Average min max
00:22:21.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10450.00 40.82 6124.33 1122.81 50521.84
00:22:21.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10357.20 40.46 6179.02 1158.62 49540.94
00:22:21.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10205.70 39.87 6289.39 1094.73 51383.41
00:22:21.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9053.50 35.37 7068.35 1169.39 51368.32
00:22:21.999 ========================================================
00:22:21.999 Total : 40066.40 156.51 6393.82 1094.73 51383.41
00:22:21.999
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:21.999 11:21:27
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3485786 ']'
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3485786
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3485786 ']'
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3485786
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:21.999 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3485786
00:22:21.999 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:21.999 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:21.999 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3485786'
killing process with pid 3485786
00:22:21.999 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3485786
00:22:21.999 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3485786
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:22.260 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:22:25.561
00:22:25.561 real 0m55.437s
00:22:25.561 user 2m49.802s
00:22:25.561 sys 0m12.390s
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:25.561 ************************************
00:22:25.561 END TEST nvmf_perf_adq
00:22:25.561 ************************************
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@10 -- # set +x
00:22:25.561 ************************************
00:22:25.561 START TEST nvmf_shutdown
00:22:25.561 ************************************
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:22:25.561 * Looking for test storage...
00:22:25.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:22:25.561 11:21:31
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:25.561 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.562 --rc genhtml_branch_coverage=1 00:22:25.562 --rc genhtml_function_coverage=1 00:22:25.562 --rc genhtml_legend=1 00:22:25.562 --rc geninfo_all_blocks=1 00:22:25.562 --rc geninfo_unexecuted_blocks=1 00:22:25.562 00:22:25.562 ' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.562 --rc genhtml_branch_coverage=1 00:22:25.562 --rc genhtml_function_coverage=1 00:22:25.562 --rc genhtml_legend=1 00:22:25.562 --rc geninfo_all_blocks=1 00:22:25.562 --rc geninfo_unexecuted_blocks=1 00:22:25.562 00:22:25.562 ' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.562 --rc genhtml_branch_coverage=1 00:22:25.562 --rc genhtml_function_coverage=1 00:22:25.562 --rc genhtml_legend=1 00:22:25.562 --rc geninfo_all_blocks=1 00:22:25.562 --rc geninfo_unexecuted_blocks=1 00:22:25.562 00:22:25.562 ' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.562 --rc genhtml_branch_coverage=1 00:22:25.562 --rc genhtml_function_coverage=1 00:22:25.562 --rc genhtml_legend=1 00:22:25.562 --rc geninfo_all_blocks=1 00:22:25.562 --rc geninfo_unexecuted_blocks=1 00:22:25.562 00:22:25.562 ' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:25.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.562 ************************************ 00:22:25.562 START TEST nvmf_shutdown_tc1 00:22:25.562 ************************************ 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.562 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:33.707 11:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.707 11:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.707 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:33.708 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.708 11:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:33.708 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:33.708 Found net devices under 0000:31:00.0: cvl_0_0 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:33.708 Found net devices under 0000:31:00.1: cvl_0_1 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.708 11:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.708 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.969 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.969 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.969 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.969 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:22:33.969 00:22:33.969 --- 10.0.0.2 ping statistics --- 00:22:33.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.969 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:22:33.969 00:22:33.969 --- 10.0.0.1 ping statistics --- 00:22:33.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.969 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3493164 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3493164 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3493164 ']' 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:33.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.969 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.229 [2024-12-06 11:21:40.150962] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:34.229 [2024-12-06 11:21:40.151026] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.229 [2024-12-06 11:21:40.262742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.229 [2024-12-06 11:21:40.314630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.229 [2024-12-06 11:21:40.314683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.229 [2024-12-06 11:21:40.314692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.229 [2024-12-06 11:21:40.314699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.229 [2024-12-06 11:21:40.314706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
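The `nvmf_tcp_init` sequence replayed above (common.sh@250-291) moves one port of the NIC into a private network namespace so target and initiator can talk over real hardware on one host. A dry-run sketch of those steps follows; the interface names (`cvl_0_0`, `cvl_0_1`), namespace name, and 10.0.0.x addresses are taken from the log, and the function only echoes the commands instead of executing them, since the real sequence needs root and the physical devices.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the log above.
# Names and addresses mirror the log; nothing is actually configured.
nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1
    local ns=cvl_0_0_ns_spdk
    local initiator_ip=10.0.0.1 target_ip=10.0.0.2

    echo "ip -4 addr flush $target_if"
    echo "ip -4 addr flush $initiator_if"
    echo "ip netns add $ns"
    echo "ip link set $target_if netns $ns"                  # target port moves into the namespace
    echo "ip addr add $initiator_ip/24 dev $initiator_if"    # initiator side stays in the root ns
    echo "ip netns exec $ns ip addr add $target_ip/24 dev $target_if"
    echo "ip link set $initiator_if up"
    echo "ip netns exec $ns ip link set $target_if up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 $target_ip"                              # root ns -> namespaced target
    echo "ip netns exec $ns ping -c 1 $initiator_ip"         # namespace -> root ns
}
nvmf_tcp_init_sketch
```

The two pings at the end are the same reachability check the log records before `nvmf_tgt` is started under `ip netns exec cvl_0_0_ns_spdk`.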
00:22:34.229 [2024-12-06 11:21:40.317088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.229 [2024-12-06 11:21:40.317250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.229 [2024-12-06 11:21:40.317570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.229 [2024-12-06 11:21:40.317573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.170 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.170 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:35.170 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.170 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.170 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.170 [2024-12-06 11:21:41.015788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.170 11:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
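The trace above shows `shutdown.sh@23` defining `num_subsystems=({1..10})` and then looping `for i in "${num_subsystems[@]}"` with a `cat` per iteration to append one RPC fragment per subsystem into `rpcs.txt`. A minimal sketch of that accumulate-then-batch pattern follows; the RPC line written per subsystem is an illustrative placeholder, since the log shows the loop and the `cat` calls but not the heredoc body itself.

```shell
#!/usr/bin/env bash
# Sketch of the shutdown.sh subsystem loop: brace expansion picks the
# subsystem IDs, and one RPC line per ID is appended to a batch file.
num_subsystems=({1..3})   # the real test uses {1..10}
rpcs=$(mktemp)
for i in "${num_subsystems[@]}"; do
    # Placeholder RPC line; the actual heredoc contents are not in the log.
    printf 'nvmf_create_subsystem nqn.2016-06.io.spdk:cnode%s\n' "$i" >>"$rpcs"
done
count=$(wc -l <"$rpcs")
echo "wrote $count RPC lines"
rm -f "$rpcs"
```

Batching all subsystem-creation RPCs into one file and replaying it once is cheaper than issuing ten separate `rpc_cmd` round-trips.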
00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.170 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.170 Malloc1 00:22:35.170 [2024-12-06 11:21:41.132269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.170 Malloc2 00:22:35.170 Malloc3 00:22:35.170 Malloc4 00:22:35.170 Malloc5 00:22:35.170 Malloc6 00:22:35.432 Malloc7 00:22:35.432 Malloc8 00:22:35.432 Malloc9 
00:22:35.432 Malloc10 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3493423 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3493423 /var/tmp/bdevperf.sock 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3493423 ']' 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
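The bdev_svc instance launched here reads its config from `gen_nvmf_target_json 1 2 ... 10`, whose expansion is replayed below (common.sh@560-586): one heredoc-built `"params"`/`"method"` JSON object per subsystem, accumulated into `config=()` and joined with commas. A compact sketch of that pattern follows, using the same field names as the log; `printf` stands in for the heredoc, and the `jq .` validation step of the real helper is omitted so the sketch has no external dependency.

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: one attach-controller JSON object per
# subsystem argument, comma-joined, mirroring common.sh@560-586 above.
gen_target_json_sketch() {
    local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    local config=() subsystem
    for subsystem in "${@:-1}"; do   # defaults to subsystem 1, as in the log
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem" "$subsystem")")
    done
    local IFS=,                      # "${config[*]}" joins elements with ','
    printf '%s\n' "${config[*]}"
}
gen_target_json_sketch 1 2
```

The joined output matches the shape of the `printf '%s\n' '{ ... },{ ... }'` expansion recorded further down in the log, where each placeholder has been substituted with `tcp`, `10.0.0.2`, and `4420`.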
00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.432 { 00:22:35.432 "params": { 00:22:35.432 "name": "Nvme$subsystem", 00:22:35.432 "trtype": "$TEST_TRANSPORT", 00:22:35.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.432 "adrfam": "ipv4", 00:22:35.432 "trsvcid": "$NVMF_PORT", 00:22:35.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.432 "hdgst": ${hdgst:-false}, 00:22:35.432 "ddgst": ${ddgst:-false} 00:22:35.432 }, 00:22:35.432 "method": "bdev_nvme_attach_controller" 00:22:35.432 } 00:22:35.432 EOF 00:22:35.432 )") 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.432 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.432 { 00:22:35.432 "params": { 00:22:35.433 "name": "Nvme$subsystem", 00:22:35.433 "trtype": "$TEST_TRANSPORT", 00:22:35.433 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.433 "adrfam": "ipv4", 00:22:35.433 "trsvcid": "$NVMF_PORT", 00:22:35.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.433 "hdgst": ${hdgst:-false}, 00:22:35.433 "ddgst": ${ddgst:-false} 00:22:35.433 }, 00:22:35.433 "method": "bdev_nvme_attach_controller" 00:22:35.433 } 00:22:35.433 EOF 00:22:35.433 )") 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.433 { 00:22:35.433 "params": { 00:22:35.433 "name": "Nvme$subsystem", 00:22:35.433 "trtype": "$TEST_TRANSPORT", 00:22:35.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.433 "adrfam": "ipv4", 00:22:35.433 "trsvcid": "$NVMF_PORT", 00:22:35.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.433 "hdgst": ${hdgst:-false}, 00:22:35.433 "ddgst": ${ddgst:-false} 00:22:35.433 }, 00:22:35.433 "method": "bdev_nvme_attach_controller" 00:22:35.433 } 00:22:35.433 EOF 00:22:35.433 )") 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.433 { 00:22:35.433 "params": { 00:22:35.433 "name": "Nvme$subsystem", 00:22:35.433 "trtype": "$TEST_TRANSPORT", 00:22:35.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.433 "adrfam": "ipv4", 00:22:35.433 "trsvcid": "$NVMF_PORT", 00:22:35.433 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.433 "hdgst": ${hdgst:-false}, 00:22:35.433 "ddgst": ${ddgst:-false} 00:22:35.433 }, 00:22:35.433 "method": "bdev_nvme_attach_controller" 00:22:35.433 } 00:22:35.433 EOF 00:22:35.433 )") 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.433 { 00:22:35.433 "params": { 00:22:35.433 "name": "Nvme$subsystem", 00:22:35.433 "trtype": "$TEST_TRANSPORT", 00:22:35.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.433 "adrfam": "ipv4", 00:22:35.433 "trsvcid": "$NVMF_PORT", 00:22:35.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.433 "hdgst": ${hdgst:-false}, 00:22:35.433 "ddgst": ${ddgst:-false} 00:22:35.433 }, 00:22:35.433 "method": "bdev_nvme_attach_controller" 00:22:35.433 } 00:22:35.433 EOF 00:22:35.433 )") 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.433 { 00:22:35.433 "params": { 00:22:35.433 "name": "Nvme$subsystem", 00:22:35.433 "trtype": "$TEST_TRANSPORT", 00:22:35.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.433 "adrfam": "ipv4", 00:22:35.433 "trsvcid": "$NVMF_PORT", 00:22:35.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.433 "hdgst": 
${hdgst:-false}, 00:22:35.433 "ddgst": ${ddgst:-false} 00:22:35.433 }, 00:22:35.433 "method": "bdev_nvme_attach_controller" 00:22:35.433 } 00:22:35.433 EOF 00:22:35.433 )") 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.433 [2024-12-06 11:21:41.590725] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:35.433 [2024-12-06 11:21:41.590778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.433 { 00:22:35.433 "params": { 00:22:35.433 "name": "Nvme$subsystem", 00:22:35.433 "trtype": "$TEST_TRANSPORT", 00:22:35.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.433 "adrfam": "ipv4", 00:22:35.433 "trsvcid": "$NVMF_PORT", 00:22:35.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.433 "hdgst": ${hdgst:-false}, 00:22:35.433 "ddgst": ${ddgst:-false} 00:22:35.433 }, 00:22:35.433 "method": "bdev_nvme_attach_controller" 00:22:35.433 } 00:22:35.433 EOF 00:22:35.433 )") 00:22:35.433 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.694 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.694 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.694 { 00:22:35.694 "params": { 00:22:35.694 "name": "Nvme$subsystem", 00:22:35.694 "trtype": 
"$TEST_TRANSPORT", 00:22:35.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.694 "adrfam": "ipv4", 00:22:35.694 "trsvcid": "$NVMF_PORT", 00:22:35.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.694 "hdgst": ${hdgst:-false}, 00:22:35.694 "ddgst": ${ddgst:-false} 00:22:35.694 }, 00:22:35.694 "method": "bdev_nvme_attach_controller" 00:22:35.694 } 00:22:35.694 EOF 00:22:35.694 )") 00:22:35.694 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.694 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.694 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.694 { 00:22:35.694 "params": { 00:22:35.694 "name": "Nvme$subsystem", 00:22:35.694 "trtype": "$TEST_TRANSPORT", 00:22:35.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.694 "adrfam": "ipv4", 00:22:35.694 "trsvcid": "$NVMF_PORT", 00:22:35.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.694 "hdgst": ${hdgst:-false}, 00:22:35.694 "ddgst": ${ddgst:-false} 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 } 00:22:35.695 EOF 00:22:35.695 )") 00:22:35.695 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.695 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.695 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.695 { 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme$subsystem", 00:22:35.695 "trtype": "$TEST_TRANSPORT", 00:22:35.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": 
"$NVMF_PORT", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.695 "hdgst": ${hdgst:-false}, 00:22:35.695 "ddgst": ${ddgst:-false} 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 } 00:22:35.695 EOF 00:22:35.695 )") 00:22:35.695 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.695 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:35.695 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:35.695 11:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme1", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme2", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme3", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.695 "hdgst": false, 00:22:35.695 
"ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme4", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme5", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme6", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme7", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme8", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 
"trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme9", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 },{ 00:22:35.695 "params": { 00:22:35.695 "name": "Nvme10", 00:22:35.695 "trtype": "tcp", 00:22:35.695 "traddr": "10.0.0.2", 00:22:35.695 "adrfam": "ipv4", 00:22:35.695 "trsvcid": "4420", 00:22:35.695 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.695 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.695 "hdgst": false, 00:22:35.695 "ddgst": false 00:22:35.695 }, 00:22:35.695 "method": "bdev_nvme_attach_controller" 00:22:35.695 }' 00:22:35.695 [2024-12-06 11:21:41.670484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.695 [2024-12-06 11:21:41.707013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3493423 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:37.081 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:38.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3493423 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3493164 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.025 { 00:22:38.025 "params": { 00:22:38.025 "name": "Nvme$subsystem", 00:22:38.025 "trtype": "$TEST_TRANSPORT", 00:22:38.025 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:38.025 "adrfam": "ipv4", 00:22:38.025 "trsvcid": "$NVMF_PORT", 00:22:38.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.025 "hdgst": ${hdgst:-false}, 00:22:38.025 "ddgst": ${ddgst:-false} 00:22:38.025 }, 00:22:38.025 "method": "bdev_nvme_attach_controller" 00:22:38.025 } 00:22:38.025 EOF 00:22:38.025 )") 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.025 { 00:22:38.025 "params": { 00:22:38.025 "name": "Nvme$subsystem", 00:22:38.025 "trtype": "$TEST_TRANSPORT", 00:22:38.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.025 "adrfam": "ipv4", 00:22:38.025 "trsvcid": "$NVMF_PORT", 00:22:38.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.025 "hdgst": ${hdgst:-false}, 00:22:38.025 "ddgst": ${ddgst:-false} 00:22:38.025 }, 00:22:38.025 "method": "bdev_nvme_attach_controller" 00:22:38.025 } 00:22:38.025 EOF 00:22:38.025 )") 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.025 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.025 { 00:22:38.025 "params": { 00:22:38.025 "name": "Nvme$subsystem", 00:22:38.025 "trtype": "$TEST_TRANSPORT", 00:22:38.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.025 "adrfam": "ipv4", 00:22:38.025 "trsvcid": "$NVMF_PORT", 00:22:38.025 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.025 "hdgst": ${hdgst:-false}, 00:22:38.025 "ddgst": ${ddgst:-false} 00:22:38.025 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.026 { 00:22:38.026 "params": { 00:22:38.026 "name": "Nvme$subsystem", 00:22:38.026 "trtype": "$TEST_TRANSPORT", 00:22:38.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 "trsvcid": "$NVMF_PORT", 00:22:38.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.026 "hdgst": ${hdgst:-false}, 00:22:38.026 "ddgst": ${ddgst:-false} 00:22:38.026 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.026 { 00:22:38.026 "params": { 00:22:38.026 "name": "Nvme$subsystem", 00:22:38.026 "trtype": "$TEST_TRANSPORT", 00:22:38.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 "trsvcid": "$NVMF_PORT", 00:22:38.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.026 "hdgst": 
${hdgst:-false}, 00:22:38.026 "ddgst": ${ddgst:-false} 00:22:38.026 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 [2024-12-06 11:21:44.146033] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:38.026 [2024-12-06 11:21:44.146088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493923 ] 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.026 { 00:22:38.026 "params": { 00:22:38.026 "name": "Nvme$subsystem", 00:22:38.026 "trtype": "$TEST_TRANSPORT", 00:22:38.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 "trsvcid": "$NVMF_PORT", 00:22:38.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.026 "hdgst": ${hdgst:-false}, 00:22:38.026 "ddgst": ${ddgst:-false} 00:22:38.026 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.026 { 00:22:38.026 "params": { 00:22:38.026 "name": "Nvme$subsystem", 00:22:38.026 
"trtype": "$TEST_TRANSPORT", 00:22:38.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 "trsvcid": "$NVMF_PORT", 00:22:38.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.026 "hdgst": ${hdgst:-false}, 00:22:38.026 "ddgst": ${ddgst:-false} 00:22:38.026 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.026 { 00:22:38.026 "params": { 00:22:38.026 "name": "Nvme$subsystem", 00:22:38.026 "trtype": "$TEST_TRANSPORT", 00:22:38.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 "trsvcid": "$NVMF_PORT", 00:22:38.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.026 "hdgst": ${hdgst:-false}, 00:22:38.026 "ddgst": ${ddgst:-false} 00:22:38.026 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.026 { 00:22:38.026 "params": { 00:22:38.026 "name": "Nvme$subsystem", 00:22:38.026 "trtype": "$TEST_TRANSPORT", 00:22:38.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 
"trsvcid": "$NVMF_PORT", 00:22:38.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.026 "hdgst": ${hdgst:-false}, 00:22:38.026 "ddgst": ${ddgst:-false} 00:22:38.026 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.026 { 00:22:38.026 "params": { 00:22:38.026 "name": "Nvme$subsystem", 00:22:38.026 "trtype": "$TEST_TRANSPORT", 00:22:38.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.026 "adrfam": "ipv4", 00:22:38.026 "trsvcid": "$NVMF_PORT", 00:22:38.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.026 "hdgst": ${hdgst:-false}, 00:22:38.026 "ddgst": ${ddgst:-false} 00:22:38.026 }, 00:22:38.026 "method": "bdev_nvme_attach_controller" 00:22:38.026 } 00:22:38.026 EOF 00:22:38.026 )") 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.026 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
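The repeated `config+=("$(cat <<-EOF ... EOF)")` fragments traced above come from SPDK's `gen_nvmf_target_json` helper in `nvmf/common.sh`: one `bdev_nvme_attach_controller` stanza is generated per subsystem, collected in a bash array, then comma-joined and fed to `jq`. A simplified, runnable sketch of that pattern (the `:-` defaults stand in for the `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` environment variables the real suite exports):

```shell
#!/usr/bin/env bash
# Sketch of SPDK's gen_nvmf_target_json pattern: build one JSON
# fragment per subsystem via a here-doc, then join them with ",".
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                     # join the fragments with commas
    printf '%s\n' "${config[*]}"    # stream to jq / bdevperf --json
}

gen_nvmf_target_json 1 2
```

This is why the merged JSON printed next in the log shows `},{` between the Nvme1..Nvme10 blocks: the comma comes from `IFS=,` expansion of `"${config[*]}"`, not from the per-subsystem templates.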
00:22:38.287 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:38.287 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme1", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme2", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme3", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme4", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 
00:22:38.287 "name": "Nvme5", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme6", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme7", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme8", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme9", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 },{ 00:22:38.287 "params": { 00:22:38.287 "name": "Nvme10", 00:22:38.287 "trtype": "tcp", 00:22:38.287 "traddr": "10.0.0.2", 00:22:38.287 "adrfam": "ipv4", 00:22:38.287 "trsvcid": "4420", 00:22:38.287 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:38.287 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:38.287 "hdgst": false, 00:22:38.287 "ddgst": false 00:22:38.287 }, 00:22:38.287 "method": "bdev_nvme_attach_controller" 00:22:38.287 }' 00:22:38.287 [2024-12-06 11:21:44.225948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.287 [2024-12-06 11:21:44.261920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.670 Running I/O for 1 seconds... 00:22:40.611 1862.00 IOPS, 116.38 MiB/s 00:22:40.611 Latency(us) 00:22:40.611 [2024-12-06T10:21:46.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.611 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme1n1 : 1.04 184.51 11.53 0.00 0.00 343226.60 41287.68 267386.88 00:22:40.611 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme2n1 : 1.12 227.83 14.24 0.00 0.00 272637.01 18896.21 248162.99 00:22:40.611 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme3n1 : 1.04 245.31 15.33 0.00 0.00 248623.36 20425.39 249910.61 00:22:40.611 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme4n1 : 1.14 225.30 14.08 0.00 0.00 266928.43 17257.81 241172.48 00:22:40.611 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme5n1 : 1.14 223.96 14.00 0.00 0.00 263921.49 15400.96 256901.12 00:22:40.611 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme6n1 : 1.14 230.31 14.39 0.00 0.00 250649.25 5597.87 255153.49 00:22:40.611 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme7n1 : 1.18 270.49 16.91 0.00 0.00 211463.85 15182.51 267386.88 00:22:40.611 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme8n1 : 1.17 272.58 17.04 0.00 0.00 205945.60 12506.45 249910.61 00:22:40.611 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme9n1 : 1.18 276.93 17.31 0.00 0.00 198043.55 1160.53 244667.73 00:22:40.611 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.611 Verification LBA range: start 0x0 length 0x400 00:22:40.611 Nvme10n1 : 1.20 267.05 16.69 0.00 0.00 203159.81 10922.67 267386.88 00:22:40.611 [2024-12-06T10:21:46.778Z] =================================================================================================================== 00:22:40.611 [2024-12-06T10:21:46.778Z] Total : 2424.27 151.52 0.00 0.00 240251.16 1160.53 267386.88 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
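The bdevperf summary above reports both IOPS and MiB/s for fixed 64 KiB I/Os (`-o 65536` on the bdevperf command line), so the two columns are related by `MiB/s = IOPS * io_size / 2^20`. A quick awk check against the Total line from this run (2424.27 IOPS, 151.52 MiB/s):

```shell
# Sanity-check the bdevperf IOPS-to-throughput relationship with awk:
# at 64 KiB per I/O, MiB/s is simply IOPS / 16.
awk 'BEGIN {
    io_size = 65536      # -o 65536 from the bdevperf invocation
    iops    = 2424.27    # "Total" IOPS reported above
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# → 151.52 MiB/s
```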
00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.873 rmmod nvme_tcp 00:22:40.873 rmmod nvme_fabrics 00:22:40.873 rmmod nvme_keyring 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3493164 ']' 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3493164 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3493164 ']' 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3493164 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.873 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3493164 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3493164' 00:22:41.134 killing process with pid 3493164 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3493164 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3493164 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.134 11:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.134 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.769 00:22:43.769 real 0m17.732s 00:22:43.769 user 0m34.123s 00:22:43.769 sys 0m7.519s 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.769 ************************************ 00:22:43.769 END TEST nvmf_shutdown_tc1 00:22:43.769 ************************************ 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:43.769 ************************************ 
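The `kill -0` / `kill` / `wait` sequence traced in the teardown above is the `killprocess` helper from `common/autotest_common.sh`. A minimal sketch of that shutdown pattern (simplified: the real helper also inspects the process name via `ps` and refuses to kill `sudo`):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: probe liveness with kill -0,
# signal the process, then reap it with wait so its exit status is
# collected before cleanup (rmmod, workspace removal) continues.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # must still be running
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore signal status
}

sleep 30 &          # stand-in for the long-running nvmf target process
killprocess $!
```

Reaping with `wait` matters in these scripts: it guarantees the target has fully exited before the suite unloads `nvme-tcp`/`nvme-fabrics` and flushes the test interfaces, avoiding the race where module removal fails because the process still holds a reference.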
00:22:43.769 START TEST nvmf_shutdown_tc2 00:22:43.769 ************************************ 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.769 11:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.769 11:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.769 11:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:43.769 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:43.769 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:43.769 11:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.769 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.770 11:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:43.770 Found net devices under 0000:31:00.0: cvl_0_0 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:43.770 Found net devices under 0000:31:00.1: cvl_0_1 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.770 11:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:43.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:22:43.770 00:22:43.770 --- 10.0.0.2 ping statistics --- 00:22:43.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.770 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:22:43.770 00:22:43.770 --- 10.0.0.1 ping statistics --- 00:22:43.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.770 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.770 11:21:49 
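The nvmf_tcp_init sequence above moves the target-side port into its own network namespace, addresses both ends of the link, brings them up, and verifies reachability with ping in each direction. A dry-run sketch of that plumbing is below; the interface names mirror the log, but IP="echo ip" only prints the commands so the sketch runs without root or real NICs.

```shell
#!/usr/bin/env bash
# Dry-run: prefixing with echo prints each ip(8) invocation instead of
# executing it, so no root privileges or hardware are needed.
IP="echo ip"

setup_tcp_topology() {
    local target_if=$1 initiator_if=$2
    local ns="${target_if}_ns_spdk"
    $IP netns add "$ns"                                        # isolate the target port
    $IP link set "$target_if" netns "$ns"                      # move it into the namespace
    $IP addr add 10.0.0.1/24 dev "$initiator_if"               # initiator side
    $IP netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
    $IP link set "$initiator_if" up
    $IP netns exec "$ns" ip link set "$target_if" up
}

setup_tcp_topology cvl_0_0 cvl_0_1
```

Dropping the echo from IP (and running as root) would perform the same setup the log records, after which the cross-namespace pings exercise the 10.0.0.1 ↔ 10.0.0.2 path.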
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3495040 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3495040 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3495040 ']' 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.770 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.770 [2024-12-06 11:21:49.857109] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:43.770 [2024-12-06 11:21:49.857160] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.770 [2024-12-06 11:21:49.932972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.031 [2024-12-06 11:21:49.964102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.031 [2024-12-06 11:21:49.964131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.032 [2024-12-06 11:21:49.964136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.032 [2024-12-06 11:21:49.964141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.032 [2024-12-06 11:21:49.964145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.032 [2024-12-06 11:21:49.965393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.032 [2024-12-06 11:21:49.965550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.032 [2024-12-06 11:21:49.965707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.032 [2024-12-06 11:21:49.965709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.604 [2024-12-06 11:21:50.701303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.604 11:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.604 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.865 Malloc1 00:22:44.865 [2024-12-06 11:21:50.810696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.865 Malloc2 00:22:44.865 Malloc3 00:22:44.865 Malloc4 00:22:44.865 Malloc5 00:22:44.865 Malloc6 00:22:44.865 Malloc7 00:22:45.126 Malloc8 00:22:45.126 Malloc9 
00:22:45.126 Malloc10 00:22:45.126 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.126 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:45.126 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.126 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.126 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3495424 00:22:45.126 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3495424 /var/tmp/bdevperf.sock 00:22:45.126 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3495424 ']' 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 
00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 [2024-12-06 11:21:51.259725] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:45.127 [2024-12-06 11:21:51.259780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495424 ] 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.127 { 00:22:45.127 "params": { 00:22:45.127 "name": "Nvme$subsystem", 00:22:45.127 "trtype": "$TEST_TRANSPORT", 00:22:45.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.127 "adrfam": "ipv4", 00:22:45.127 "trsvcid": "$NVMF_PORT", 00:22:45.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.127 "hdgst": ${hdgst:-false}, 00:22:45.127 "ddgst": ${ddgst:-false} 00:22:45.127 }, 00:22:45.127 "method": "bdev_nvme_attach_controller" 00:22:45.127 } 00:22:45.127 EOF 00:22:45.127 )") 00:22:45.127 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.128 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.128 11:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.128 { 00:22:45.128 "params": { 00:22:45.128 "name": "Nvme$subsystem", 00:22:45.128 "trtype": "$TEST_TRANSPORT", 00:22:45.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.128 "adrfam": "ipv4", 00:22:45.128 "trsvcid": "$NVMF_PORT", 00:22:45.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.128 "hdgst": ${hdgst:-false}, 00:22:45.128 "ddgst": ${ddgst:-false} 00:22:45.128 }, 00:22:45.128 "method": "bdev_nvme_attach_controller" 00:22:45.128 } 00:22:45.128 EOF 00:22:45.128 )") 00:22:45.128 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.128 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:45.389 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:45.389 11:21:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:45.389 "params": { 00:22:45.389 "name": "Nvme1", 00:22:45.389 "trtype": "tcp", 00:22:45.389 "traddr": "10.0.0.2", 00:22:45.389 "adrfam": "ipv4", 00:22:45.389 "trsvcid": "4420", 00:22:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.389 "hdgst": false, 00:22:45.389 "ddgst": false 00:22:45.389 }, 00:22:45.389 "method": "bdev_nvme_attach_controller" 00:22:45.389 },{ 00:22:45.389 "params": { 00:22:45.389 "name": "Nvme2", 00:22:45.389 "trtype": "tcp", 00:22:45.389 "traddr": "10.0.0.2", 00:22:45.389 "adrfam": "ipv4", 00:22:45.389 "trsvcid": "4420", 00:22:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.389 "hdgst": false, 00:22:45.389 "ddgst": false 00:22:45.389 }, 00:22:45.389 "method": "bdev_nvme_attach_controller" 00:22:45.389 },{ 
00:22:45.389 "params": { 00:22:45.389 "name": "Nvme3", 00:22:45.389 "trtype": "tcp", 00:22:45.389 "traddr": "10.0.0.2", 00:22:45.389 "adrfam": "ipv4", 00:22:45.389 "trsvcid": "4420", 00:22:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:45.389 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:45.389 "hdgst": false, 00:22:45.389 "ddgst": false 00:22:45.389 }, 00:22:45.389 "method": "bdev_nvme_attach_controller" 00:22:45.389 },{ 00:22:45.389 "params": { 00:22:45.389 "name": "Nvme4", 00:22:45.389 "trtype": "tcp", 00:22:45.389 "traddr": "10.0.0.2", 00:22:45.389 "adrfam": "ipv4", 00:22:45.389 "trsvcid": "4420", 00:22:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:45.389 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:45.389 "hdgst": false, 00:22:45.389 "ddgst": false 00:22:45.389 }, 00:22:45.389 "method": "bdev_nvme_attach_controller" 00:22:45.389 },{ 00:22:45.389 "params": { 00:22:45.389 "name": "Nvme5", 00:22:45.389 "trtype": "tcp", 00:22:45.389 "traddr": "10.0.0.2", 00:22:45.389 "adrfam": "ipv4", 00:22:45.389 "trsvcid": "4420", 00:22:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:45.389 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:45.389 "hdgst": false, 00:22:45.389 "ddgst": false 00:22:45.389 }, 00:22:45.390 "method": "bdev_nvme_attach_controller" 00:22:45.390 },{ 00:22:45.390 "params": { 00:22:45.390 "name": "Nvme6", 00:22:45.390 "trtype": "tcp", 00:22:45.390 "traddr": "10.0.0.2", 00:22:45.390 "adrfam": "ipv4", 00:22:45.390 "trsvcid": "4420", 00:22:45.390 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:45.390 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:45.390 "hdgst": false, 00:22:45.390 "ddgst": false 00:22:45.390 }, 00:22:45.390 "method": "bdev_nvme_attach_controller" 00:22:45.390 },{ 00:22:45.390 "params": { 00:22:45.390 "name": "Nvme7", 00:22:45.390 "trtype": "tcp", 00:22:45.390 "traddr": "10.0.0.2", 00:22:45.390 "adrfam": "ipv4", 00:22:45.390 "trsvcid": "4420", 00:22:45.390 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:45.390 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:45.390 "hdgst": false, 00:22:45.390 "ddgst": false 00:22:45.390 }, 00:22:45.390 "method": "bdev_nvme_attach_controller" 00:22:45.390 },{ 00:22:45.390 "params": { 00:22:45.390 "name": "Nvme8", 00:22:45.390 "trtype": "tcp", 00:22:45.390 "traddr": "10.0.0.2", 00:22:45.390 "adrfam": "ipv4", 00:22:45.390 "trsvcid": "4420", 00:22:45.390 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:45.390 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:45.390 "hdgst": false, 00:22:45.390 "ddgst": false 00:22:45.390 }, 00:22:45.390 "method": "bdev_nvme_attach_controller" 00:22:45.390 },{ 00:22:45.390 "params": { 00:22:45.390 "name": "Nvme9", 00:22:45.390 "trtype": "tcp", 00:22:45.390 "traddr": "10.0.0.2", 00:22:45.390 "adrfam": "ipv4", 00:22:45.390 "trsvcid": "4420", 00:22:45.390 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:45.390 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:45.390 "hdgst": false, 00:22:45.390 "ddgst": false 00:22:45.390 }, 00:22:45.390 "method": "bdev_nvme_attach_controller" 00:22:45.390 },{ 00:22:45.390 "params": { 00:22:45.390 "name": "Nvme10", 00:22:45.390 "trtype": "tcp", 00:22:45.390 "traddr": "10.0.0.2", 00:22:45.390 "adrfam": "ipv4", 00:22:45.390 "trsvcid": "4420", 00:22:45.390 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:45.390 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:45.390 "hdgst": false, 00:22:45.390 "ddgst": false 00:22:45.390 }, 00:22:45.390 "method": "bdev_nvme_attach_controller" 00:22:45.390 }' 00:22:45.390 [2024-12-06 11:21:51.338531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.390 [2024-12-06 11:21:51.375215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.776 Running I/O for 10 seconds... 
00:22:46.776 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.776 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:46.776 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:46.776 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.776 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.037 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.037 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:47.037 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:47.037 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.298 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:47.299 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:47.299 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3495424 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3495424 
']' 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3495424 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3495424 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3495424' 00:22:47.559 killing process with pid 3495424 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3495424 00:22:47.559 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3495424 00:22:47.820 Received shutdown signal, test time was about 0.984299 seconds 00:22:47.820 00:22:47.820 Latency(us) 00:22:47.820 [2024-12-06T10:21:53.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.820 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme1n1 : 0.95 201.15 12.57 0.00 0.00 314412.37 15182.51 256901.12 00:22:47.820 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme2n1 : 0.95 202.50 12.66 0.00 0.00 305892.69 30801.92 241172.48 
00:22:47.820 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme3n1 : 0.97 267.01 16.69 0.00 0.00 226476.45 3713.71 244667.73 00:22:47.820 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme4n1 : 0.96 267.04 16.69 0.00 0.00 222304.75 10158.08 248162.99 00:22:47.820 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme5n1 : 0.98 261.51 16.34 0.00 0.00 222434.13 18568.53 251658.24 00:22:47.820 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme6n1 : 0.98 273.74 17.11 0.00 0.00 206504.16 7809.71 234181.97 00:22:47.820 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme7n1 : 0.98 260.32 16.27 0.00 0.00 213529.17 14527.15 249910.61 00:22:47.820 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme8n1 : 0.97 263.37 16.46 0.00 0.00 206193.92 18022.40 253405.87 00:22:47.820 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme9n1 : 0.96 199.15 12.45 0.00 0.00 265959.54 39540.05 248162.99 00:22:47.820 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.820 Verification LBA range: start 0x0 length 0x400 00:22:47.820 Nvme10n1 : 0.97 199.95 12.50 0.00 0.00 257327.38 3181.23 270882.13 00:22:47.820 [2024-12-06T10:21:53.987Z] =================================================================================================================== 00:22:47.820 
[2024-12-06T10:21:53.987Z] Total : 2395.75 149.73 0.00 0.00 239288.68 3181.23 270882.13 00:22:47.820 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:48.763 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3495040 00:22:48.763 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:48.763 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:48.763 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:48.763 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.763 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:48.764 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.764 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:48.764 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.764 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:48.764 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.764 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.764 rmmod nvme_tcp 00:22:49.024 rmmod nvme_fabrics 00:22:49.024 rmmod nvme_keyring 00:22:49.024 11:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.024 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:49.024 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:49.024 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3495040 ']' 00:22:49.024 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3495040 00:22:49.024 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3495040 ']' 00:22:49.025 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3495040 00:22:49.025 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:49.025 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.025 11:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3495040 00:22:49.025 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:49.025 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:49.025 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3495040' 00:22:49.025 killing process with pid 3495040 00:22:49.025 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3495040 00:22:49.025 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 3495040 00:22:49.285 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.285 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.285 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.285 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:49.285 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:49.286 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.286 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.286 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.286 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.286 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.286 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.286 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.199 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.199 00:22:51.199 real 0m7.926s 00:22:51.199 user 0m24.090s 00:22:51.199 sys 0m1.281s 00:22:51.199 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:22:51.199 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.199 ************************************ 00:22:51.199 END TEST nvmf_shutdown_tc2 00:22:51.199 ************************************ 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:51.461 ************************************ 00:22:51.461 START TEST nvmf_shutdown_tc3 00:22:51.461 ************************************ 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.461 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.462 11:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:51.462 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.462 11:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:51.462 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:51.462 Found net devices under 0000:31:00.0: cvl_0_0 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:51.462 Found net devices under 0000:31:00.1: cvl_0_1 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.462 11:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.462 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:22:51.723 00:22:51.723 --- 10.0.0.2 ping statistics --- 00:22:51.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.723 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:22:51.723 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:22:51.723 00:22:51.724 --- 10.0.0.1 ping statistics --- 00:22:51.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.724 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3496889 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3496889 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3496889 ']' 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.724 11:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.724 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.983 [2024-12-06 11:21:57.908351] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:51.984 [2024-12-06 11:21:57.908416] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.984 [2024-12-06 11:21:58.009021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.984 [2024-12-06 11:21:58.042925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.984 [2024-12-06 11:21:58.042957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.984 [2024-12-06 11:21:58.042962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.984 [2024-12-06 11:21:58.042967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.984 [2024-12-06 11:21:58.042972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.984 [2024-12-06 11:21:58.044542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.984 [2024-12-06 11:21:58.044705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.984 [2024-12-06 11:21:58.045021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.984 [2024-12-06 11:21:58.045113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.555 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.555 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:52.555 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.555 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.555 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.816 [2024-12-06 11:21:58.756643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.816 11:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.816 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.816 Malloc1 00:22:52.816 [2024-12-06 11:21:58.866756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.816 Malloc2 00:22:52.816 Malloc3 00:22:52.816 Malloc4 00:22:53.077 Malloc5 00:22:53.077 Malloc6 00:22:53.077 Malloc7 00:22:53.077 Malloc8 00:22:53.077 Malloc9 
00:22:53.077 Malloc10 00:22:53.077 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.077 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:53.077 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.077 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3497221 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3497221 /var/tmp/bdevperf.sock 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3497221 ']' 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 
00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 [2024-12-06 11:21:59.317002] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:53.339 [2024-12-06 11:21:59.317057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497221 ] 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.339 11:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.339 { 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme$subsystem", 00:22:53.339 "trtype": "$TEST_TRANSPORT", 00:22:53.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "$NVMF_PORT", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.339 "hdgst": ${hdgst:-false}, 00:22:53.339 "ddgst": ${ddgst:-false} 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 } 00:22:53.339 EOF 00:22:53.339 )") 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:53.339 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme1", 00:22:53.339 "trtype": "tcp", 00:22:53.339 "traddr": "10.0.0.2", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "4420", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.339 "hdgst": false, 00:22:53.339 "ddgst": false 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 },{ 00:22:53.339 "params": { 00:22:53.339 "name": "Nvme2", 00:22:53.339 "trtype": "tcp", 00:22:53.339 "traddr": "10.0.0.2", 00:22:53.339 "adrfam": "ipv4", 00:22:53.339 "trsvcid": "4420", 00:22:53.339 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:53.339 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.339 "hdgst": false, 00:22:53.339 "ddgst": false 00:22:53.339 }, 00:22:53.339 "method": "bdev_nvme_attach_controller" 00:22:53.339 },{ 
00:22:53.340 "params": { 00:22:53.340 "name": "Nvme3", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:53.340 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 },{ 00:22:53.340 "params": { 00:22:53.340 "name": "Nvme4", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:53.340 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 },{ 00:22:53.340 "params": { 00:22:53.340 "name": "Nvme5", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:53.340 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 },{ 00:22:53.340 "params": { 00:22:53.340 "name": "Nvme6", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:53.340 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 },{ 00:22:53.340 "params": { 00:22:53.340 "name": "Nvme7", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:53.340 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 },{ 00:22:53.340 "params": { 00:22:53.340 "name": "Nvme8", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:53.340 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 },{ 00:22:53.340 "params": { 00:22:53.340 "name": "Nvme9", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:53.340 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 },{ 00:22:53.340 "params": { 00:22:53.340 "name": "Nvme10", 00:22:53.340 "trtype": "tcp", 00:22:53.340 "traddr": "10.0.0.2", 00:22:53.340 "adrfam": "ipv4", 00:22:53.340 "trsvcid": "4420", 00:22:53.340 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:53.340 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:53.340 "hdgst": false, 00:22:53.340 "ddgst": false 00:22:53.340 }, 00:22:53.340 "method": "bdev_nvme_attach_controller" 00:22:53.340 }' 00:22:53.340 [2024-12-06 11:21:59.396023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.340 [2024-12-06 11:21:59.432244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.726 Running I/O for 10 seconds... 
00:22:54.726 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.726 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:54.726 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:54.726 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.726 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.987 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:54.988 11:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:54.988 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:55.248 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:55.508 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:55.508 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.508 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.508 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.508 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.508 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.508 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=147 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 147 -ge 100 ']' 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:55.777 11:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3496889 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3496889 ']' 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3496889 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3496889 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3496889' 00:22:55.777 killing process with pid 3496889 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3496889 00:22:55.777 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3496889 00:22:55.777 [2024-12-06 11:22:01.758129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2250 is same with the state(6) to be set 00:22:55.777 [2024-12-06 11:22:01.758179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2250 is same with the state(6) to be set 00:22:55.777 [2024-12-06 11:22:01.758186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18f2250 is same with the state(6) to be set
00:22:55.778 [2024-12-06 11:22:01.758964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.778 [2024-12-06 11:22:01.759001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.778 [2024-12-06 11:22:01.759014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.778 [2024-12-06 11:22:01.759022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.778 [2024-12-06 11:22:01.759030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.778 [2024-12-06 11:22:01.759038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.778 [2024-12-06 11:22:01.759046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:55.778 [2024-12-06 11:22:01.759053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.778 [2024-12-06 11:22:01.759061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132b10 is same with the state(6) to be set
00:22:55.778 [2024-12-06 11:22:01.759265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4e20 is same with the state(6) to be set
00:22:55.778 [2024-12-06 11:22:01.764123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.764430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2740 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770674] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770734] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770797] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770853] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770917] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770973] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.770978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c10 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.771962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.771985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.771991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.771996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772033] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.779 [2024-12-06 11:22:01.772084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772088] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772144] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772200] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772256] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3100 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772883] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set 00:22:55.780 [2024-12-06 11:22:01.772939] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f35d0 is same with the state(6) to be set
[previous message repeated for tqpair=0x18f35d0 from 2024-12-06 11:22:01.772944 through 11:22:01.773131]
00:22:55.781 [2024-12-06 11:22:01.773788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3aa0 is same with the state(6) to be set
00:22:55.781 [2024-12-06 11:22:01.774461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3f70 is same with the state(6) to be set
[previous message repeated for tqpair=0x18f3f70 through 11:22:01.774761]
00:22:55.781 [2024-12-06 11:22:01.775448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4460 is same with the state(6) to be set
[previous message repeated for tqpair=0x18f4460 through 11:22:01.775735]
00:22:55.782 [2024-12-06 11:22:01.776176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4930 is same with the state(6) to be set
[previous message repeated for tqpair=0x18f4930 through 11:22:01.776476]
00:22:55.782 [2024-12-06 11:22:01.776480] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4930 is same with the state(6) to be set 00:22:55.782 [2024-12-06 11:22:01.778001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105e610 is same with the state(6) to be set 00:22:55.782 [2024-12-06 11:22:01.778127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5940 is same with the state(6) to be set 00:22:55.782 [2024-12-06 11:22:01.778231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778264] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565a00 is same with the state(6) to be set 00:22:55.782 [2024-12-06 11:22:01.778323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15674f0 is same with the state(6) to be set 00:22:55.782 [2024-12-06 11:22:01.778408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.782 [2024-12-06 11:22:01.778425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.782 [2024-12-06 11:22:01.778432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130960 is same with the state(6) to be set 00:22:55.783 [2024-12-06 11:22:01.778489] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1132b10 (9): Bad file descriptor 00:22:55.783 [2024-12-06 11:22:01.778517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5fc0 is same with the state(6) to be set 00:22:55.783 [2024-12-06 11:22:01.778622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f430 is same with the state(6) to be set 00:22:55.783 [2024-12-06 11:22:01.778706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778741] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570b20 is same with the state(6) to be set 00:22:55.783 [2024-12-06 11:22:01.778798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:55.783 [2024-12-06 11:22:01.778856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a62f0 is same with the state(6) to be set 00:22:55.783 [2024-12-06 11:22:01.778914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.778925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.778948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.778966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.778983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.778993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 
11:22:01.779000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779100] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 
[2024-12-06 11:22:01.779299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.783 [2024-12-06 11:22:01.779456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.783 [2024-12-06 11:22:01.779466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779690] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779784] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.779977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.779984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 
11:22:01.779994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134c880 is same with the state(6) to be set 00:22:55.784 [2024-12-06 11:22:01.780337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780387] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 
[2024-12-06 11:22:01.780587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.784 [2024-12-06 11:22:01.780683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.784 [2024-12-06 11:22:01.780690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.780931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.780939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 
[2024-12-06 11:22:01.791660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.791989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.791997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 
11:22:01.792071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.785 [2024-12-06 11:22:01.792157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.785 [2024-12-06 11:22:01.792443] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: 
*ERROR*: Unexpected PDU type 0x00 00:22:55.785 [2024-12-06 11:22:01.792530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105e610 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5940 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565a00 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15674f0 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1130960 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792628] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:22:55.785 [2024-12-06 11:22:01.792644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5fc0 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113f430 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570b20 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.792700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a62f0 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.795564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:55.785 [2024-12-06 11:22:01.796044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:55.785 [2024-12-06 11:22:01.796498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.785 [2024-12-06 11:22:01.796520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1132b10 with addr=10.0.0.2, port=4420 00:22:55.785 [2024-12-06 11:22:01.796529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132b10 is same with the state(6) to be set 00:22:55.785 [2024-12-06 11:22:01.797331] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.785 [2024-12-06 11:22:01.797382] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.785 [2024-12-06 11:22:01.797419] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.785 [2024-12-06 11:22:01.797787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.785 [2024-12-06 11:22:01.797802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570b20 with 
addr=10.0.0.2, port=4420 00:22:55.785 [2024-12-06 11:22:01.797810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570b20 is same with the state(6) to be set 00:22:55.785 [2024-12-06 11:22:01.797822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1132b10 (9): Bad file descriptor 00:22:55.785 [2024-12-06 11:22:01.797886] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.785 [2024-12-06 11:22:01.797926] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.785 [2024-12-06 11:22:01.797999] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:55.786 [2024-12-06 11:22:01.798044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.786 [2024-12-06 11:22:01.798056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.786 [2024-12-06 11:22:01.798071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.786 [2024-12-06 11:22:01.798079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.786 [2024-12-06 11:22:01.798089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.786 [2024-12-06 11:22:01.798096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.786 [2024-12-06 11:22:01.798106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.786 [2024-12-06 11:22:01.798113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.798986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.798996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.799003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.786 [2024-12-06 11:22:01.799013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.786 [2024-12-06 11:22:01.799020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e560 is same with the state(6) to be set
00:22:55.787 [2024-12-06 11:22:01.799256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570b20 (9): Bad file descriptor
00:22:55.787 [2024-12-06 11:22:01.799269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:55.787 [2024-12-06 11:22:01.799276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:55.787 [2024-12-06 11:22:01.799286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:55.787 [2024-12-06 11:22:01.799295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:55.787 [2024-12-06 11:22:01.799324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.799990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.787 [2024-12-06 11:22:01.799998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.787 [2024-12-06 11:22:01.800007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.788 [2024-12-06 11:22:01.800014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.788 [2024-12-06 11:22:01.800024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.788 [2024-12-06 11:22:01.800033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.788 [2024-12-06 11:22:01.800043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.788 [2024-12-06 11:22:01.800050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.788 [2024-12-06 11:22:01.800060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.788 [2024-12-06 11:22:01.800067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.788 [2024-12-06 11:22:01.800077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.788 [2024-12-06 11:22:01.800084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.788 [2024-12-06 11:22:01.800093] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 
[2024-12-06 11:22:01.800291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.788 [2024-12-06 11:22:01.800325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.788 [2024-12-06 11:22:01.800333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134d8b0 is same with the state(6) to be set 00:22:55.788 [2024-12-06 11:22:01.801642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:55.788 [2024-12-06 11:22:01.801673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:55.788 [2024-12-06 11:22:01.801682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:55.788 [2024-12-06 11:22:01.801692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:55.788 [2024-12-06 11:22:01.801701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:55.788 [2024-12-06 11:22:01.802916 .. 11:22:01.804034] nvme_qpair.c: [repeated NOTICE pairs trimmed: READ sqid:1 cid:0-63 lba:24576-32640 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:55.789 [2024-12-06 11:22:01.804042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1549b40 is same with the state(6) to be set
00:22:55.789 [2024-12-06 11:22:01.804116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:55.789 [2024-12-06 11:22:01.804505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.789 [2024-12-06 11:22:01.804519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b5fc0 with addr=10.0.0.2, port=4420
00:22:55.789 [2024-12-06 11:22:01.804527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5fc0 is same with the state(6) to be set
00:22:55.789 [2024-12-06 11:22:01.804560] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:55.789 [2024-12-06 11:22:01.806109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:55.789 [2024-12-06 11:22:01.806521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.789 [2024-12-06 11:22:01.806535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113f430 with addr=10.0.0.2, port=4420
00:22:55.789 [2024-12-06 11:22:01.806550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f430 is same with the state(6) to be set
00:22:55.789 [2024-12-06 11:22:01.806560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5fc0 (9): Bad file descriptor
00:22:55.789 [2024-12-06 11:22:01.806868 .. 11:22:01.807120] nvme_qpair.c: [repeated NOTICE pairs trimmed: READ sqid:1 cid:0-14 lba:24576-26368 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:55.789 [2024-12-06 11:22:01.807127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.789 [2024-12-06 11:22:01.807137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.789 [2024-12-06 11:22:01.807144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.789 [2024-12-06 11:22:01.807154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.789 [2024-12-06 11:22:01.807162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.789 [2024-12-06 11:22:01.807171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.789 [2024-12-06 11:22:01.807179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.789 [2024-12-06 11:22:01.807188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.789 [2024-12-06 11:22:01.807196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.789 [2024-12-06 11:22:01.807206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.789 [2024-12-06 11:22:01.807213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.789 [2024-12-06 11:22:01.807223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807419] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 
11:22:01.807619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:55.790 [2024-12-06 11:22:01.807919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.807979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.807987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151ca20 is same with the state(6) to be set 00:22:55.790 [2024-12-06 11:22:01.809271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.809284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.809295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.809303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.809314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.809321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.809331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.790 [2024-12-06 11:22:01.809339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.790 [2024-12-06 11:22:01.809349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:55.791 [2024-12-06 11:22:01.809497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.809988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.809998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 
11:22:01.810189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.791 [2024-12-06 11:22:01.810284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.791 [2024-12-06 11:22:01.810294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.810301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.810311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.810319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.810330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.810338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.810347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.810355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.810364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.810372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.810382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 
nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.810390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.810398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154ad20 is same with the state(6) to be set 00:22:55.792 [2024-12-06 11:22:01.811664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:55.792 [2024-12-06 11:22:01.811761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.811987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.811994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812160] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812254] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 
11:22:01.812454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.792 [2024-12-06 11:22:01.812526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.792 [2024-12-06 11:22:01.812534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:55.793 [2024-12-06 11:22:01.812758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.812800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.812809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154d2a0 is same with the state(6) to be set 00:22:55.793 [2024-12-06 11:22:01.814087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814524] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.793 [2024-12-06 11:22:01.814656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.793 [2024-12-06 11:22:01.814663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 
11:22:01.814724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.814989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.814997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:55.794 [2024-12-06 11:22:01.815024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815119] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.815207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.815215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x154f850 is same with the state(6) to be set 00:22:55.794 [2024-12-06 11:22:01.816493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816812] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.794 [2024-12-06 11:22:01.816941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.794 [2024-12-06 11:22:01.816950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.816958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.816967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.816975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.816984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 
11:22:01.817019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:55.795 [2024-12-06 11:22:01.817315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:55.795 [2024-12-06 11:22:01.817594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.795 [2024-12-06 11:22:01.817602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:22:55.795 [2024-12-06 11:22:01.817612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.795 [2024-12-06 11:22:01.817619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.795 [2024-12-06 11:22:01.817629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.795 [2024-12-06 11:22:01.817636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:55.795 [2024-12-06 11:22:01.817645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1388e80 is same with the state(6) to be set
00:22:55.795 [2024-12-06 11:22:01.819595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:55.795 [2024-12-06 11:22:01.819637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:55.795 [2024-12-06 11:22:01.819651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:55.795 [2024-12-06 11:22:01.820040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.795 [2024-12-06 11:22:01.820058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15674f0 with addr=10.0.0.2, port=4420
00:22:55.795 [2024-12-06 11:22:01.820068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15674f0 is same with the state(6) to be set
00:22:55.795 [2024-12-06 11:22:01.820085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113f430 (9): Bad file descriptor
00:22:55.795 [2024-12-06 11:22:01.820096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:55.795 [2024-12-06 11:22:01.820103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:55.795 [2024-12-06 11:22:01.820113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:55.795 [2024-12-06 11:22:01.820121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:55.795 [2024-12-06 11:22:01.820158] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:55.795 [2024-12-06 11:22:01.820176] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:55.795 [2024-12-06 11:22:01.820186] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:55.795 [2024-12-06 11:22:01.820197] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:55.795 [2024-12-06 11:22:01.820208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15674f0 (9): Bad file descriptor
00:22:55.795 [2024-12-06 11:22:01.820564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:55.795 task offset: 24576 on job bdev=Nvme1n1 fails
00:22:55.795
00:22:55.795 Latency(us)
00:22:55.795 [2024-12-06T10:22:01.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:55.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.795 Job: Nvme1n1 ended in about 0.96 seconds with error
00:22:55.795 Verification LBA range: start 0x0 length 0x400
00:22:55.795 Nvme1n1 : 0.96 200.88 12.55 66.96 0.00 236297.17 22063.79 228939.09
00:22:55.795 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.795 Job: Nvme2n1 ended in about 0.96 seconds with error
00:22:55.795 Verification LBA range: start 0x0 length 0x400
00:22:55.795 Nvme2n1 : 0.96 143.04 8.94 60.12 0.00 305226.12 17913.17 256901.12
00:22:55.795 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.795 Job: Nvme3n1 ended in about 0.97 seconds with error
00:22:55.795 Verification LBA range: start 0x0 length 0x400
00:22:55.795 Nvme3n1 : 0.97 197.73 12.36 65.91 0.00 230626.99 22609.92 251658.24
00:22:55.795 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.795 Job: Nvme4n1 ended in about 0.97 seconds with error
00:22:55.795 Verification LBA range: start 0x0 length 0x400
00:22:55.795 Nvme4n1 : 0.97 198.42 12.40 66.14 0.00 225000.53 19005.44 258648.75
00:22:55.795 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.795 Job: Nvme5n1 ended in about 0.97 seconds with error
00:22:55.795 Verification LBA range: start 0x0 length 0x400
00:22:55.795 Nvme5n1 : 0.97 131.49 8.22 65.75 0.00 295696.78 16165.55 251658.24
00:22:55.796 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.796 Job: Nvme6n1 ended in about 0.96 seconds with error
00:22:55.796 Verification LBA range: start 0x0 length 0x400
00:22:55.796 Nvme6n1 : 0.96 198.50 12.41 66.86 0.00 214568.69 16493.23 256901.12
00:22:55.796 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.796 Job: Nvme7n1 ended in about 0.98 seconds with error
00:22:55.796 Verification LBA range: start 0x0 length 0x400
00:22:55.796 Nvme7n1 : 0.98 196.75 12.30 65.58 0.00 212830.93 17913.17 265639.25
00:22:55.796 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.796 Job: Nvme8n1 ended in about 0.96 seconds with error
00:22:55.796 Verification LBA range: start 0x0 length 0x400
00:22:55.796 Nvme8n1 : 0.96 199.28 12.45 66.43 0.00 205093.12 5242.88 253405.87
00:22:55.796 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.796 Job: Nvme9n1 ended in about 0.98 seconds with error
00:22:55.796 Verification LBA range: start 0x0 length 0x400
00:22:55.796 Nvme9n1 : 0.98 130.85 8.18 65.42 0.00 272286.15 18786.99 251658.24
00:22:55.796 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:55.796 Job: Nvme10n1 ended in about 0.98 seconds with error
00:22:55.796 Verification LBA range: start 0x0 length 0x400
00:22:55.796 Nvme10n1 : 0.98 130.52 8.16 65.26 0.00 266972.16 19988.48 277872.64
00:22:55.796 [2024-12-06T10:22:01.963Z] ===================================================================================================================
00:22:55.796 [2024-12-06T10:22:01.963Z] Total : 1727.46 107.97 654.43 0.00 242305.91 5242.88 277872.64
00:22:55.796 [2024-12-06 11:22:01.846501] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:55.796 [2024-12-06 11:22:01.846534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:55.796 [2024-12-06 11:22:01.846547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:55.796 [2024-12-06 11:22:01.846861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.846885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1130960 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.846895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130960 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.847217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.847227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1565a00 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.847234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565a00 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.847437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.847447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x105e610 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.847454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105e610 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.847464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.847471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.847480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.847489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.848852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:55.796 [2024-12-06 11:22:01.848872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:55.796 [2024-12-06 11:22:01.849001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.849013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b5940 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.849020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5940 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.849293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.849304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a62f0 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.849315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a62f0 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.849510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.849520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1132b10 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.849528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132b10 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.849539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1130960 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.849551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565a00 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.849561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105e610 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.849570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.849577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.849584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.849592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.849631] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:22:55.796 [2024-12-06 11:22:01.849643] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:22:55.796 [2024-12-06 11:22:01.849656] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:55.796 [2024-12-06 11:22:01.850291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.850308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1570b20 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.850316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1570b20 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.850670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.850680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b5fc0 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.850687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b5fc0 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.850697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5940 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.850707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a62f0 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.850717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1132b10 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.850725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.850732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.850739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.850747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.850754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.850761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.850771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.850777] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.850784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.850790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.850797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.850803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.851554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:55.796 [2024-12-06 11:22:01.851570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:55.796 [2024-12-06 11:22:01.851597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570b20 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.851608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b5fc0 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.851617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.851625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.851634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.851642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.851651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.851658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.851667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.851674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.851683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.851690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.851699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.851706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.851978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.851993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113f430 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.852002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f430 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.852366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:55.796 [2024-12-06 11:22:01.852376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15674f0 with addr=10.0.0.2, port=4420
00:22:55.796 [2024-12-06 11:22:01.852383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15674f0 is same with the state(6) to be set
00:22:55.796 [2024-12-06 11:22:01.852390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.852401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.852408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.852415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.852422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.852428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.852434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.852441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:55.796 [2024-12-06 11:22:01.852471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113f430 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.852481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15674f0 (9): Bad file descriptor
00:22:55.796 [2024-12-06 11:22:01.852510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:55.796 [2024-12-06 11:22:01.852518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:55.796 [2024-12-06 11:22:01.852525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:55.796 [2024-12-06 11:22:01.852531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:55.797 [2024-12-06 11:22:01.852538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:55.797 [2024-12-06 11:22:01.852544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:55.797 [2024-12-06 11:22:01.852551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:55.797 [2024-12-06 11:22:01.852557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:56.056 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3497221
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3497221
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3497221
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3496889 ']'
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3496889
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3496889 ']'
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3496889
00:22:56.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3496889) - No such process
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3496889 is not found'
00:22:56.997 Process with pid 3496889 is not found
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:22:56.997
11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.997 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.541 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.541 00:22:59.541 real 0m7.733s 00:22:59.541 user 0m18.715s 00:22:59.541 sys 0m1.300s 00:22:59.541 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.541 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.542 ************************************ 00:22:59.542 END TEST nvmf_shutdown_tc3 00:22:59.542 ************************************ 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:59.542 11:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.542 ************************************ 00:22:59.542 START TEST nvmf_shutdown_tc4 00:22:59.542 ************************************ 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.542 11:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:59.542 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.542 
11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:59.542 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:59.542 Found net devices under 0000:31:00.0: cvl_0_0 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:59.542 Found net devices under 0000:31:00.1: cvl_0_1 
00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.542 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.543 11:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:22:59.543 00:22:59.543 --- 10.0.0.2 ping statistics --- 00:22:59.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.543 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:22:59.543 00:22:59.543 --- 10.0.0.1 ping statistics --- 00:22:59.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.543 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3498414 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3498414 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3498414 ']' 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.543 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.803 [2024-12-06 11:22:05.708018] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:22:59.803 [2024-12-06 11:22:05.708088] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.803 [2024-12-06 11:22:05.813034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.803 [2024-12-06 11:22:05.852306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.803 [2024-12-06 11:22:05.852342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.803 [2024-12-06 11:22:05.852348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.803 [2024-12-06 11:22:05.852353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.803 [2024-12-06 11:22:05.852357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.803 [2024-12-06 11:22:05.854081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.803 [2024-12-06 11:22:05.854210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.803 [2024-12-06 11:22:05.854532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.803 [2024-12-06 11:22:05.854533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.374 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.374 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:00.374 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.374 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.374 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.635 [2024-12-06 11:22:06.548712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.635 11:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.635 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.635 Malloc1 00:23:00.635 [2024-12-06 11:22:06.656623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.635 Malloc2 00:23:00.635 Malloc3 00:23:00.635 Malloc4 00:23:00.635 Malloc5 00:23:00.896 Malloc6 00:23:00.896 Malloc7 00:23:00.896 Malloc8 00:23:00.896 Malloc9 
00:23:00.896 Malloc10 00:23:00.896 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.896 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:00.896 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.896 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.896 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3498795 00:23:00.896 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:00.896 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:01.157 [2024-12-06 11:22:07.119820] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3498414 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3498414 ']' 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3498414 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3498414 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3498414' 00:23:06.459 killing process with pid 3498414 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3498414 00:23:06.459 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3498414 00:23:06.459 [2024-12-06 11:22:12.140989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe630 is same with the state(6) to be set 00:23:06.459 [2024-12-06 
11:22:12.141030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe630 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe630 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe630 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe630 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeb20 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeb20 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeb20 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeb20 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeb20 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeb20 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeb20 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141725] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.141765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbeff0 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.459 [2024-12-06 11:22:12.142157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 
is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe160 is same with the state(6) to be set 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed 
with error (sct=0, sc=8) 00:23:06.460 [2024-12-06 11:22:12.142727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbd7a0 is same with the state(6) to be set 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 [2024-12-06 11:22:12.142742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbd7a0 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbd7a0 is same with the state(6) to be set 00:23:06.460 [2024-12-06 11:22:12.142752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbd7a0 is same with the state(6) to be set 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 [2024-12-06 11:22:12.143087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error 
(sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 [2024-12-06 11:22:12.143920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting 
I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.460 Write completed with error (sct=0, sc=8) 00:23:06.460 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write 
completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 [2024-12-06 11:22:12.144828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 
Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 
00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 [2024-12-06 11:22:12.145542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf9b0 is same with the state(6) to be set 00:23:06.461 starting I/O failed: -6 00:23:06.461 [2024-12-06 11:22:12.145561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf9b0 is same with the state(6) to be set 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 [2024-12-06 11:22:12.145566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf9b0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.145572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf9b0 is same with the state(6) to be set 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461
starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 [2024-12-06 11:22:12.145803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbfea0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.145816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbfea0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.145821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbfea0 is same with the state(6) to be set 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 [2024-12-06 11:22:12.145977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0370 is same with the state(6) to be set 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 [2024-12-06 11:22:12.145994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0370 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.145999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0370 is same with the state(6) to be set 00:23:06.461 starting I/O failed: -6 00:23:06.461 [2024-12-06 11:22:12.146004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0370 is same with the state(6) to be set 00:23:06.461
[2024-12-06 11:22:12.146009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0370 is same with the state(6) to be set 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 [2024-12-06 11:22:12.146014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0370 is same with the state(6) to be set 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 Write completed with error (sct=0, sc=8) 00:23:06.461 starting I/O failed: -6 00:23:06.461 [2024-12-06 11:22:12.146096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.461 NVMe io qpair process completion error 00:23:06.461 [2024-12-06 11:22:12.146199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf4e0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.146212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf4e0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.146217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf4e0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.146222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf4e0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.146231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf4e0 is same with the state(6) to be set 00:23:06.461 [2024-12-06 11:22:12.146236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf4e0 is same with the state(6) to be set 
00:23:06.462 [2024-12-06 11:22:12.146241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf4e0 is same with the state(6) to be set
00:23:06.462 [identical tcp.c:1790 message repeated 19 times in total, timestamps 11:22:12.146241 through 11:22:12.146327; duplicate entries omitted]
00:23:06.462 Write completed with error (sct=0, sc=8)
00:23:06.462 starting I/O failed: -6
00:23:06.462 [interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeated; duplicates omitted]
00:23:06.462 [2024-12-06 11:22:12.147142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.462 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.462 [2024-12-06 11:22:12.147938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.463 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.463 [2024-12-06 11:22:12.148834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.463 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.463 [2024-12-06 11:22:12.150421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.463 NVMe io qpair process completion error
00:23:06.463 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.464 [2024-12-06 11:22:12.151573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.464 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.464 [2024-12-06 11:22:12.152375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.464 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.464 [2024-12-06 11:22:12.153298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.465 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.465 [2024-12-06 11:22:12.156149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.465 NVMe io qpair process completion error
00:23:06.465 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.465 [2024-12-06 11:22:12.157285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.465 [interleaved write-error / I/O-failed entries repeated; duplicates omitted]
00:23:06.466 [2024-12-06 11:22:12.158112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.466 [interleaved write-error / I/O-failed entries continue; duplicates omitted]
00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 Write completed with 
error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 [2024-12-06 11:22:12.159045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, 
sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error 
(sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.466 Write completed with error (sct=0, sc=8) 00:23:06.466 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with 
error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 [2024-12-06 11:22:12.161601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.467 NVMe io qpair process completion error 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O 
failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 [2024-12-06 11:22:12.162627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 
Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 
starting I/O failed: -6 00:23:06.467 [2024-12-06 11:22:12.163481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.467 starting I/O failed: -6 00:23:06.467 starting I/O failed: -6 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write 
completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 Write completed with error (sct=0, sc=8) 00:23:06.467 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 [2024-12-06 11:22:12.164624] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with 
error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed 
with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 [2024-12-06 11:22:12.166095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:23:06.468 NVMe io qpair process completion error 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 starting I/O failed: -6 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 Write completed with error (sct=0, sc=8) 00:23:06.468 
starting I/O failed: -6
00:23:06.468 Write completed with error (sct=0, sc=8)
00:23:06.468 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:23:06.468 [2024-12-06 11:22:12.167162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write completion error entries omitted ...]
00:23:06.469 [2024-12-06 11:22:12.167976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write completion error entries omitted ...]
00:23:06.469 [2024-12-06 11:22:12.168886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write completion error entries omitted ...]
00:23:06.470 [2024-12-06 11:22:12.171523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.470 NVMe io qpair process completion error
[... repeated write completion error entries omitted ...]
00:23:06.470 [2024-12-06 11:22:12.172576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write completion error entries omitted ...]
00:23:06.470 [2024-12-06 11:22:12.173480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write completion error entries omitted ...]
00:23:06.471 [2024-12-06 11:22:12.174378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write completion error entries omitted ...]
00:23:06.471 [2024-12-06 11:22:12.175995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.471 NVMe io qpair process completion error
[... repeated write completion error entries omitted ...]
00:23:06.471 [2024-12-06 11:22:12.177482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write completion error entries omitted ...]
00:23:06.472 [2024-12-06 11:22:12.178303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write completion error entries omitted ...]
00:23:06.472 [2024-12-06 11:22:12.179682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write completion error entries omitted ...]
00:23:06.472 Write completed with error (sct=0, sc=8)
00:23:06.472 starting
I/O failed: -6 00:23:06.472 Write completed with error (sct=0, sc=8) 00:23:06.472 starting I/O failed: -6 00:23:06.472 Write completed with error (sct=0, sc=8) 00:23:06.472 starting I/O failed: -6 00:23:06.472 Write completed with error (sct=0, sc=8) 00:23:06.472 starting I/O failed: -6 00:23:06.472 Write completed with error (sct=0, sc=8) 00:23:06.472 starting I/O failed: -6 00:23:06.472 Write completed with error (sct=0, sc=8) 00:23:06.472 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 
starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 [2024-12-06 11:22:12.182322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.473 NVMe io qpair process completion error 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with 
error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 [2024-12-06 11:22:12.183630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 
00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 [2024-12-06 11:22:12.184458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.473 Write 
completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 
00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.473 starting I/O failed: -6 00:23:06.473 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with 
error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 [2024-12-06 11:22:12.185409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 
00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: 
-6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O 
failed: -6 00:23:06.474 [2024-12-06 11:22:12.187087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.474 NVMe io qpair process completion error 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 
Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 starting I/O failed: -6 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.474 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 [2024-12-06 11:22:12.188175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.475 starting I/O failed: -6 00:23:06.475 starting I/O failed: -6 00:23:06.475 starting I/O failed: -6 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, 
sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O 
failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write 
completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 [2024-12-06 11:22:12.189986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 
Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 
00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.475 Write completed with error (sct=0, sc=8) 00:23:06.475 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: 
-6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 Write completed with error (sct=0, sc=8) 00:23:06.476 starting I/O failed: -6 00:23:06.476 [2024-12-06 11:22:12.192728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.476 NVMe io qpair process completion error 00:23:06.476 Initializing NVMe Controllers 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:06.476 Controller IO queue size 128, less than required. 00:23:06.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:06.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:06.476 Initialization complete. Launching workers. 
00:23:06.476 ======================================================== 00:23:06.476 Latency(us) 00:23:06.476 Device Information : IOPS MiB/s Average min max 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1854.04 79.67 69054.79 741.13 117346.02 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1897.06 81.51 67506.82 813.67 148920.55 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1887.04 81.08 67896.62 800.24 127672.40 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1897.48 81.53 67554.33 687.99 121569.95 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1882.65 80.90 68124.24 618.94 120111.19 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1909.38 82.04 67193.15 519.62 134325.84 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1917.31 82.38 66241.50 850.03 117999.13 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1919.19 82.47 66190.08 796.33 118371.99 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1885.78 81.03 67381.74 817.32 118514.27 00:23:06.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1915.85 82.32 66361.58 722.06 119884.07 00:23:06.476 ======================================================== 00:23:06.476 Total : 18965.78 814.94 67342.11 519.62 148920.55 00:23:06.476 00:23:06.476 [2024-12-06 11:22:12.197847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22399f0 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.197896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a9e0 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.197928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x223a6b0 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.197958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22396c0 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.197986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239390 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.198027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a380 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.198058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b360 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.198087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223b540 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.198116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2239060 is same with the state(6) to be set 00:23:06.476 [2024-12-06 11:22:12.198145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223a050 is same with the state(6) to be set 00:23:06.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:06.476 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3498795 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3498795 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3498795 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.422 rmmod nvme_tcp 00:23:07.422 rmmod nvme_fabrics 00:23:07.422 rmmod nvme_keyring 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3498414 ']' 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3498414 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3498414 ']' 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3498414 00:23:07.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3498414) - No such process 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3498414 is not found' 00:23:07.422 Process with pid 3498414 is not found 
00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.422 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.967 00:23:09.967 real 0m10.280s 00:23:09.967 user 0m27.805s 00:23:09.967 sys 0m4.073s 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.967 11:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:09.967 ************************************ 00:23:09.967 END TEST nvmf_shutdown_tc4 00:23:09.967 ************************************ 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:09.967 00:23:09.967 real 0m44.255s 00:23:09.967 user 1m44.993s 00:23:09.967 sys 0m14.528s 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:09.967 ************************************ 00:23:09.967 END TEST nvmf_shutdown 00:23:09.967 ************************************ 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:09.967 ************************************ 00:23:09.967 START TEST nvmf_nsid 00:23:09.967 ************************************ 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:09.967 * Looking for test storage... 
00:23:09.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.967 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.968 
11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.968 --rc genhtml_branch_coverage=1 00:23:09.968 --rc genhtml_function_coverage=1 00:23:09.968 --rc genhtml_legend=1 00:23:09.968 --rc geninfo_all_blocks=1 00:23:09.968 --rc 
geninfo_unexecuted_blocks=1 00:23:09.968 00:23:09.968 ' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.968 --rc genhtml_branch_coverage=1 00:23:09.968 --rc genhtml_function_coverage=1 00:23:09.968 --rc genhtml_legend=1 00:23:09.968 --rc geninfo_all_blocks=1 00:23:09.968 --rc geninfo_unexecuted_blocks=1 00:23:09.968 00:23:09.968 ' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.968 --rc genhtml_branch_coverage=1 00:23:09.968 --rc genhtml_function_coverage=1 00:23:09.968 --rc genhtml_legend=1 00:23:09.968 --rc geninfo_all_blocks=1 00:23:09.968 --rc geninfo_unexecuted_blocks=1 00:23:09.968 00:23:09.968 ' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:09.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.968 --rc genhtml_branch_coverage=1 00:23:09.968 --rc genhtml_function_coverage=1 00:23:09.968 --rc genhtml_legend=1 00:23:09.968 --rc geninfo_all_blocks=1 00:23:09.968 --rc geninfo_unexecuted_blocks=1 00:23:09.968 00:23:09.968 ' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.968 11:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.968 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.969 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:18.110 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:18.110 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:18.110 Found net devices under 0000:31:00.0: cvl_0_0 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:18.110 Found net devices under 0000:31:00.1: cvl_0_1 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.110 11:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.110 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.110 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.110 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.110 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.110 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.110 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.110 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.111 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:18.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:23:18.111 00:23:18.111 --- 10.0.0.2 ping statistics --- 00:23:18.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.111 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:23:18.111 00:23:18.111 --- 10.0.0.1 ping statistics --- 00:23:18.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.111 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.111 11:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3504828 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3504828 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3504828 ']' 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.111 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:18.371 [2024-12-06 11:22:24.276352] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:23:18.371 [2024-12-06 11:22:24.276403] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.371 [2024-12-06 11:22:24.365389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.371 [2024-12-06 11:22:24.399779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.371 [2024-12-06 11:22:24.399810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.371 [2024-12-06 11:22:24.399818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.371 [2024-12-06 11:22:24.399825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.371 [2024-12-06 11:22:24.399830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:18.371 [2024-12-06 11:22:24.400418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3504864 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:18.940 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.201 
11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=694990bc-1180-4891-8af5-24976f59eeeb 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=55bff1ce-9d0a-4041-ad2a-cd4eae1c4cc2 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=42337ec6-5009-44d6-b98e-92c3adc40585 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:19.201 null0 00:23:19.201 null1 00:23:19.201 null2 00:23:19.201 [2024-12-06 11:22:25.164903] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:23:19.201 [2024-12-06 11:22:25.164956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504864 ] 00:23:19.201 [2024-12-06 11:22:25.165671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.201 [2024-12-06 11:22:25.189886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3504864 /var/tmp/tgt2.sock 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3504864 ']' 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.201 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:19.201 [2024-12-06 11:22:25.257508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.201 [2024-12-06 11:22:25.293716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.461 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.461 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:19.461 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:19.774 [2024-12-06 11:22:25.783762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.774 [2024-12-06 11:22:25.799898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:19.774 nvme0n1 nvme0n2 00:23:19.774 nvme1n1 00:23:19.774 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:19.774 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:19.774 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:21.160 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 694990bc-1180-4891-8af5-24976f59eeeb 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:22.545 11:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=694990bc118048918af524976f59eeeb 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 694990BC118048918AF524976F59EEEB 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 694990BC118048918AF524976F59EEEB == \6\9\4\9\9\0\B\C\1\1\8\0\4\8\9\1\8\A\F\5\2\4\9\7\6\F\5\9\E\E\E\B ]] 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 55bff1ce-9d0a-4041-ad2a-cd4eae1c4cc2 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:22.545 
11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:22.545 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=55bff1ce9d0a4041ad2acd4eae1c4cc2 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 55BFF1CE9D0A4041AD2ACD4EAE1C4CC2 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 55BFF1CE9D0A4041AD2ACD4EAE1C4CC2 == \5\5\B\F\F\1\C\E\9\D\0\A\4\0\4\1\A\D\2\A\C\D\4\E\A\E\1\C\4\C\C\2 ]] 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 42337ec6-5009-44d6-b98e-92c3adc40585 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=42337ec6500944d6b98e92c3adc40585 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 42337EC6500944D6B98E92C3ADC40585 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 42337EC6500944D6B98E92C3ADC40585 == \4\2\3\3\7\E\C\6\5\0\0\9\4\4\D\6\B\9\8\E\9\2\C\3\A\D\C\4\0\5\8\5 ]] 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3504864 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3504864 ']' 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3504864 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:22.546 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504864 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504864' 00:23:22.807 killing process with pid 3504864 00:23:22.807 11:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3504864 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3504864 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.807 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:23.069 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.069 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:23.069 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.069 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.069 rmmod nvme_tcp 00:23:23.069 rmmod nvme_fabrics 00:23:23.069 rmmod nvme_keyring 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3504828 ']' 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3504828 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3504828 ']' 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3504828 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.069 11:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504828 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.069 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504828' 00:23:23.070 killing process with pid 3504828 00:23:23.070 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3504828 00:23:23.070 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3504828 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.332 11:22:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.332 11:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.245 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.245 00:23:25.245 real 0m15.647s 00:23:25.245 user 0m11.420s 00:23:25.245 sys 0m7.416s 00:23:25.245 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.245 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:25.245 ************************************ 00:23:25.245 END TEST nvmf_nsid 00:23:25.245 ************************************ 00:23:25.245 11:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:25.245 00:23:25.245 real 13m30.871s 00:23:25.245 user 27m36.830s 00:23:25.245 sys 4m12.257s 00:23:25.245 11:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.245 11:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:25.245 ************************************ 00:23:25.245 END TEST nvmf_target_extra 00:23:25.245 ************************************ 00:23:25.245 11:22:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:25.245 11:22:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.245 11:22:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.245 11:22:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.508 ************************************ 00:23:25.508 START TEST nvmf_host 00:23:25.508 ************************************ 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:25.508 * Looking for test storage... 
00:23:25.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:25.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.508 --rc genhtml_branch_coverage=1 00:23:25.508 --rc genhtml_function_coverage=1 00:23:25.508 --rc genhtml_legend=1 00:23:25.508 --rc geninfo_all_blocks=1 00:23:25.508 --rc geninfo_unexecuted_blocks=1 00:23:25.508 00:23:25.508 ' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:25.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.508 --rc genhtml_branch_coverage=1 00:23:25.508 --rc genhtml_function_coverage=1 00:23:25.508 --rc genhtml_legend=1 00:23:25.508 --rc 
geninfo_all_blocks=1 00:23:25.508 --rc geninfo_unexecuted_blocks=1 00:23:25.508 00:23:25.508 ' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:25.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.508 --rc genhtml_branch_coverage=1 00:23:25.508 --rc genhtml_function_coverage=1 00:23:25.508 --rc genhtml_legend=1 00:23:25.508 --rc geninfo_all_blocks=1 00:23:25.508 --rc geninfo_unexecuted_blocks=1 00:23:25.508 00:23:25.508 ' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:25.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.508 --rc genhtml_branch_coverage=1 00:23:25.508 --rc genhtml_function_coverage=1 00:23:25.508 --rc genhtml_legend=1 00:23:25.508 --rc geninfo_all_blocks=1 00:23:25.508 --rc geninfo_unexecuted_blocks=1 00:23:25.508 00:23:25.508 ' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:25.508 11:22:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.509 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.509 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.509 11:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.772 ************************************ 00:23:25.772 START TEST nvmf_multicontroller 00:23:25.772 ************************************ 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.772 * Looking for test storage... 
00:23:25.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 
00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.772 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.773 11:22:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.773 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.034 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.034 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.034 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.034 11:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:34.181 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:34.181 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.181 11:22:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:34.181 Found net devices under 0000:31:00.0: cvl_0_0 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:34.181 Found net devices under 0000:31:00.1: cvl_0_1 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.181 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:23:34.181 00:23:34.181 --- 10.0.0.2 ping statistics --- 00:23:34.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.181 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:23:34.181 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:23:34.181 00:23:34.181 --- 10.0.0.1 ping statistics --- 00:23:34.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.182 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3510634 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3510634 00:23:34.182 11:22:40 
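The namespace plumbing the log performs above (netns creation, moving the target port, addressing both ends, opening TCP port 4420, and ping verification) can be condensed into a sketch. The function name `setup_tcp_ns` is invented for illustration; the interface names, addresses, and commands are copied from the log. This is a reconstruction of what the trace shows, not the actual `nvmf/common.sh` implementation, and running it for real requires root plus the physical `cvl_*` ports.

```shell
# Illustrative sketch (hypothetical helper, reconstructed from the xtrace above).
# Moves the target-side port into its own netns, addresses both ends,
# opens the NVMe/TCP listener port, and verifies reachability.
setup_tcp_ns() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    ip netns add "$ns"
    ip link set "$tgt" netns "$ns"                      # target port lives in the netns
    ip addr add 10.0.0.1/24 dev "$ini"                  # initiator side, host namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    ip link set "$ini" up
    ip netns exec "$ns" ip link set "$tgt" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # cross-namespace sanity check
}
# Not invoked here: needs root and the real e810 ports.
```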
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3510634 ']' 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.182 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.443 [2024-12-06 11:22:40.396418] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:23:34.443 [2024-12-06 11:22:40.396476] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.443 [2024-12-06 11:22:40.504879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:34.443 [2024-12-06 11:22:40.557636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.443 [2024-12-06 11:22:40.557688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:34.443 [2024-12-06 11:22:40.557697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.443 [2024-12-06 11:22:40.557705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.443 [2024-12-06 11:22:40.557711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.443 [2024-12-06 11:22:40.559562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.443 [2024-12-06 11:22:40.559730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.443 [2024-12-06 11:22:40.559731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 [2024-12-06 11:22:41.249804] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 Malloc0 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 [2024-12-06 
11:22:41.316990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 [2024-12-06 11:22:41.328923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 Malloc1 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3510691 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3510691 /var/tmp/bdevperf.sock 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3510691 ']' 00:23:35.388 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.389 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.389 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.389 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.389 11:22:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.357 NVMe0n1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.357 1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:36.357 11:22:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.357 request: 00:23:36.357 { 00:23:36.357 "name": "NVMe0", 00:23:36.357 "trtype": "tcp", 00:23:36.357 "traddr": "10.0.0.2", 00:23:36.357 "adrfam": "ipv4", 00:23:36.357 "trsvcid": "4420", 00:23:36.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.357 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:36.357 "hostaddr": "10.0.0.1", 00:23:36.357 "prchk_reftag": false, 00:23:36.357 "prchk_guard": false, 00:23:36.357 "hdgst": false, 00:23:36.357 "ddgst": false, 00:23:36.357 "allow_unrecognized_csi": false, 00:23:36.357 "method": "bdev_nvme_attach_controller", 00:23:36.357 "req_id": 1 00:23:36.357 } 00:23:36.357 Got JSON-RPC error response 00:23:36.357 response: 00:23:36.357 { 00:23:36.357 "code": -114, 00:23:36.357 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:36.357 } 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:36.357 11:22:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.357 request: 00:23:36.357 { 00:23:36.357 "name": "NVMe0", 00:23:36.357 "trtype": "tcp", 00:23:36.357 "traddr": "10.0.0.2", 00:23:36.357 "adrfam": "ipv4", 00:23:36.357 "trsvcid": "4420", 00:23:36.357 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.357 "hostaddr": "10.0.0.1", 00:23:36.357 "prchk_reftag": false, 00:23:36.357 "prchk_guard": false, 00:23:36.357 "hdgst": false, 00:23:36.357 "ddgst": false, 00:23:36.357 "allow_unrecognized_csi": false, 00:23:36.357 "method": "bdev_nvme_attach_controller", 00:23:36.357 "req_id": 1 00:23:36.357 } 00:23:36.357 Got JSON-RPC error response 00:23:36.357 response: 00:23:36.357 { 00:23:36.357 "code": -114, 00:23:36.357 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:36.357 } 00:23:36.357 11:22:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.357 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.358 request: 00:23:36.358 { 00:23:36.358 "name": "NVMe0", 00:23:36.358 "trtype": "tcp", 00:23:36.358 "traddr": "10.0.0.2", 00:23:36.358 "adrfam": "ipv4", 00:23:36.358 "trsvcid": "4420", 00:23:36.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.358 "hostaddr": "10.0.0.1", 00:23:36.358 "prchk_reftag": false, 00:23:36.358 "prchk_guard": false, 00:23:36.358 "hdgst": false, 00:23:36.358 "ddgst": false, 00:23:36.358 "multipath": "disable", 00:23:36.358 "allow_unrecognized_csi": false, 00:23:36.358 "method": "bdev_nvme_attach_controller", 00:23:36.358 "req_id": 1 00:23:36.358 } 00:23:36.358 Got JSON-RPC error response 00:23:36.358 response: 00:23:36.358 { 00:23:36.358 "code": -114, 00:23:36.358 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:36.358 } 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.358 request: 00:23:36.358 { 00:23:36.358 "name": "NVMe0", 00:23:36.358 "trtype": "tcp", 00:23:36.358 "traddr": "10.0.0.2", 00:23:36.358 "adrfam": "ipv4", 00:23:36.358 "trsvcid": "4420", 00:23:36.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.358 "hostaddr": "10.0.0.1", 00:23:36.358 "prchk_reftag": false, 00:23:36.358 "prchk_guard": false, 00:23:36.358 "hdgst": false, 00:23:36.358 "ddgst": false, 00:23:36.358 "multipath": "failover", 00:23:36.358 "allow_unrecognized_csi": false, 00:23:36.358 "method": "bdev_nvme_attach_controller", 00:23:36.358 "req_id": 1 00:23:36.358 } 00:23:36.358 Got JSON-RPC error response 00:23:36.358 response: 00:23:36.358 { 00:23:36.358 "code": -114, 00:23:36.358 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:36.358 } 00:23:36.358 11:22:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.358 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.661 NVMe0n1 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.661 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.951 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:36.951 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:37.893 { 00:23:37.893 "results": [ 00:23:37.893 { 00:23:37.893 "job": "NVMe0n1", 00:23:37.893 "core_mask": "0x1", 00:23:37.893 "workload": "write", 00:23:37.893 "status": "finished", 00:23:37.893 "queue_depth": 128, 00:23:37.893 "io_size": 4096, 00:23:37.893 "runtime": 1.005992, 00:23:37.893 "iops": 26327.247135166086, 00:23:37.893 "mibps": 102.84080912174252, 00:23:37.893 "io_failed": 0, 00:23:37.893 "io_timeout": 0, 00:23:37.893 "avg_latency_us": 4848.307430369391, 00:23:37.893 "min_latency_us": 1966.08, 00:23:37.893 "max_latency_us": 8137.386666666666 00:23:37.893 } 00:23:37.893 ], 00:23:37.893 "core_count": 1 00:23:37.893 } 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3510691 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3510691 ']' 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3510691 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.893 11:22:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3510691 00:23:37.893 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.893 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.893 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3510691' 00:23:37.893 killing process with pid 3510691 00:23:37.893 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3510691 00:23:37.893 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3510691 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:38.154 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:38.154 [2024-12-06 11:22:41.453354] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:23:38.154 [2024-12-06 11:22:41.453445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510691 ] 00:23:38.154 [2024-12-06 11:22:41.535486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.154 [2024-12-06 11:22:41.572037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.154 [2024-12-06 11:22:42.816655] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name b3d1514b-bd33-4c0e-8908-bd2782f015f0 already exists 00:23:38.154 [2024-12-06 11:22:42.816685] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:b3d1514b-bd33-4c0e-8908-bd2782f015f0 alias for bdev NVMe1n1 00:23:38.154 [2024-12-06 11:22:42.816695] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:38.154 Running I/O for 1 seconds... 00:23:38.154 26309.00 IOPS, 102.77 MiB/s 00:23:38.154 Latency(us) 00:23:38.154 [2024-12-06T10:22:44.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.154 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:38.154 NVMe0n1 : 1.01 26327.25 102.84 0.00 0.00 4848.31 1966.08 8137.39 00:23:38.154 [2024-12-06T10:22:44.321Z] =================================================================================================================== 00:23:38.154 [2024-12-06T10:22:44.321Z] Total : 26327.25 102.84 0.00 0.00 4848.31 1966.08 8137.39 00:23:38.154 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.154 00:23:38.154 Latency(us) 00:23:38.154 [2024-12-06T10:22:44.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.154 [2024-12-06T10:22:44.321Z] =================================================================================================================== 00:23:38.154 [2024-12-06T10:22:44.321Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:23:38.154 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.154 rmmod nvme_tcp 00:23:38.154 rmmod nvme_fabrics 00:23:38.154 rmmod nvme_keyring 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3510634 ']' 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3510634 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3510634 ']' 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3510634 
00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.154 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3510634 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3510634' 00:23:38.414 killing process with pid 3510634 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3510634 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3510634 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.414 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:40.957 00:23:40.957 real 0m14.866s 00:23:40.957 user 0m17.166s 00:23:40.957 sys 0m7.217s 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.957 ************************************ 00:23:40.957 END TEST nvmf_multicontroller 00:23:40.957 ************************************ 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.957 ************************************ 00:23:40.957 START TEST nvmf_aer 00:23:40.957 ************************************ 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:40.957 * Looking for test storage... 
00:23:40.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:40.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.957 --rc genhtml_branch_coverage=1 00:23:40.957 --rc genhtml_function_coverage=1 00:23:40.957 --rc genhtml_legend=1 00:23:40.957 --rc geninfo_all_blocks=1 00:23:40.957 --rc geninfo_unexecuted_blocks=1 00:23:40.957 00:23:40.957 ' 00:23:40.957 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:40.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.957 --rc 
genhtml_branch_coverage=1 00:23:40.957 --rc genhtml_function_coverage=1 00:23:40.957 --rc genhtml_legend=1 00:23:40.957 --rc geninfo_all_blocks=1 00:23:40.957 --rc geninfo_unexecuted_blocks=1 00:23:40.957 00:23:40.957 ' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:40.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.958 --rc genhtml_branch_coverage=1 00:23:40.958 --rc genhtml_function_coverage=1 00:23:40.958 --rc genhtml_legend=1 00:23:40.958 --rc geninfo_all_blocks=1 00:23:40.958 --rc geninfo_unexecuted_blocks=1 00:23:40.958 00:23:40.958 ' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:40.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.958 --rc genhtml_branch_coverage=1 00:23:40.958 --rc genhtml_function_coverage=1 00:23:40.958 --rc genhtml_legend=1 00:23:40.958 --rc geninfo_all_blocks=1 00:23:40.958 --rc geninfo_unexecuted_blocks=1 00:23:40.958 00:23:40.958 ' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.958 11:22:46 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.958 11:22:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.109 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:49.110 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:49.110 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.110 11:22:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:49.110 Found net devices under 0000:31:00.0: cvl_0_0 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:49.110 Found net devices under 0000:31:00.1: cvl_0_1 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.110 11:22:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:49.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:23:49.110 00:23:49.110 --- 10.0.0.2 ping statistics --- 00:23:49.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.110 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:23:49.110 00:23:49.110 --- 10.0.0.1 ping statistics --- 00:23:49.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.110 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3516040 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3516040 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3516040 ']' 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.110 11:22:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.375 [2024-12-06 11:22:55.320815] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:23:49.375 [2024-12-06 11:22:55.320888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.375 [2024-12-06 11:22:55.415599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.375 [2024-12-06 11:22:55.456819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:49.375 [2024-12-06 11:22:55.456856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.375 [2024-12-06 11:22:55.456870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.375 [2024-12-06 11:22:55.456878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.375 [2024-12-06 11:22:55.456884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.375 [2024-12-06 11:22:55.458531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.375 [2024-12-06 11:22:55.458647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.375 [2024-12-06 11:22:55.458811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.375 [2024-12-06 11:22:55.458811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.318 [2024-12-06 11:22:56.178599] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.318 Malloc0 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.318 [2024-12-06 11:22:56.248172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.318 [ 00:23:50.318 { 00:23:50.318 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:50.318 "subtype": "Discovery", 00:23:50.318 "listen_addresses": [], 00:23:50.318 "allow_any_host": true, 00:23:50.318 "hosts": [] 00:23:50.318 }, 00:23:50.318 { 00:23:50.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.318 "subtype": "NVMe", 00:23:50.318 "listen_addresses": [ 00:23:50.318 { 00:23:50.318 "trtype": "TCP", 00:23:50.318 "adrfam": "IPv4", 00:23:50.318 "traddr": "10.0.0.2", 00:23:50.318 "trsvcid": "4420" 00:23:50.318 } 00:23:50.318 ], 00:23:50.318 "allow_any_host": true, 00:23:50.318 "hosts": [], 00:23:50.318 "serial_number": "SPDK00000000000001", 00:23:50.318 "model_number": "SPDK bdev Controller", 00:23:50.318 "max_namespaces": 2, 00:23:50.318 "min_cntlid": 1, 00:23:50.318 "max_cntlid": 65519, 00:23:50.318 "namespaces": [ 00:23:50.318 { 00:23:50.318 "nsid": 1, 00:23:50.318 "bdev_name": "Malloc0", 00:23:50.318 "name": "Malloc0", 00:23:50.318 "nguid": "DE058FFF03FC4B75869DF6AC60757E51", 00:23:50.318 "uuid": "de058fff-03fc-4b75-869d-f6ac60757e51" 00:23:50.318 } 00:23:50.318 ] 00:23:50.318 } 00:23:50.318 ] 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3516311 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:50.318 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.580 Malloc1 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.580 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.580 Asynchronous Event Request test 00:23:50.580 Attaching to 10.0.0.2 00:23:50.580 Attached to 10.0.0.2 00:23:50.580 Registering asynchronous event callbacks... 00:23:50.580 Starting namespace attribute notice tests for all controllers... 00:23:50.580 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:50.580 aer_cb - Changed Namespace 00:23:50.580 Cleaning up... 
00:23:50.580 [ 00:23:50.580 { 00:23:50.580 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:50.580 "subtype": "Discovery", 00:23:50.580 "listen_addresses": [], 00:23:50.580 "allow_any_host": true, 00:23:50.580 "hosts": [] 00:23:50.580 }, 00:23:50.580 { 00:23:50.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.580 "subtype": "NVMe", 00:23:50.580 "listen_addresses": [ 00:23:50.580 { 00:23:50.580 "trtype": "TCP", 00:23:50.580 "adrfam": "IPv4", 00:23:50.580 "traddr": "10.0.0.2", 00:23:50.580 "trsvcid": "4420" 00:23:50.580 } 00:23:50.581 ], 00:23:50.581 "allow_any_host": true, 00:23:50.581 "hosts": [], 00:23:50.581 "serial_number": "SPDK00000000000001", 00:23:50.581 "model_number": "SPDK bdev Controller", 00:23:50.581 "max_namespaces": 2, 00:23:50.581 "min_cntlid": 1, 00:23:50.581 "max_cntlid": 65519, 00:23:50.581 "namespaces": [ 00:23:50.581 { 00:23:50.581 "nsid": 1, 00:23:50.581 "bdev_name": "Malloc0", 00:23:50.581 "name": "Malloc0", 00:23:50.581 "nguid": "DE058FFF03FC4B75869DF6AC60757E51", 00:23:50.581 "uuid": "de058fff-03fc-4b75-869d-f6ac60757e51" 00:23:50.581 }, 00:23:50.581 { 00:23:50.581 "nsid": 2, 00:23:50.581 "bdev_name": "Malloc1", 00:23:50.581 "name": "Malloc1", 00:23:50.581 "nguid": "1E15446008754B75877F51C0F8F2AAEC", 00:23:50.581 "uuid": "1e154460-0875-4b75-877f-51c0f8f2aaec" 00:23:50.581 } 00:23:50.581 ] 00:23:50.581 } 00:23:50.581 ] 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3516311 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.581 11:22:56 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.581 rmmod nvme_tcp 00:23:50.581 rmmod nvme_fabrics 00:23:50.581 rmmod nvme_keyring 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3516040 ']' 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3516040 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3516040 ']' 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3516040 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3516040 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3516040' 00:23:50.581 killing process with pid 3516040 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3516040 00:23:50.581 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3516040 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.842 11:22:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.771 11:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.033 00:23:53.033 real 0m12.312s 00:23:53.033 user 0m8.056s 00:23:53.033 sys 0m6.837s 00:23:53.033 11:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.033 11:22:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.033 ************************************ 00:23:53.033 END TEST nvmf_aer 00:23:53.033 ************************************ 00:23:53.033 11:22:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:53.033 11:22:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.033 11:22:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.033 11:22:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.033 ************************************ 00:23:53.033 START TEST nvmf_async_init 00:23:53.033 ************************************ 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:53.033 * Looking for test storage... 
00:23:53.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:53.033 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.033 11:22:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.296 --rc genhtml_branch_coverage=1 00:23:53.296 --rc genhtml_function_coverage=1 00:23:53.296 --rc genhtml_legend=1 00:23:53.296 --rc geninfo_all_blocks=1 00:23:53.296 --rc geninfo_unexecuted_blocks=1 00:23:53.296 
00:23:53.296 ' 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.296 --rc genhtml_branch_coverage=1 00:23:53.296 --rc genhtml_function_coverage=1 00:23:53.296 --rc genhtml_legend=1 00:23:53.296 --rc geninfo_all_blocks=1 00:23:53.296 --rc geninfo_unexecuted_blocks=1 00:23:53.296 00:23:53.296 ' 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.296 --rc genhtml_branch_coverage=1 00:23:53.296 --rc genhtml_function_coverage=1 00:23:53.296 --rc genhtml_legend=1 00:23:53.296 --rc geninfo_all_blocks=1 00:23:53.296 --rc geninfo_unexecuted_blocks=1 00:23:53.296 00:23:53.296 ' 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.296 --rc genhtml_branch_coverage=1 00:23:53.296 --rc genhtml_function_coverage=1 00:23:53.296 --rc genhtml_legend=1 00:23:53.296 --rc geninfo_all_blocks=1 00:23:53.296 --rc geninfo_unexecuted_blocks=1 00:23:53.296 00:23:53.296 ' 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.296 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=52bbbb53a5fa413c8ae822952cee0237 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.297 11:22:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.450 11:23:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:01.450 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:01.450 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:01.450 Found net devices under 0000:31:00.0: cvl_0_0 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.450 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:01.451 Found net devices under 0000:31:00.1: cvl_0_1 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:01.451 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:01.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:01.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:24:01.712 00:24:01.712 --- 10.0.0.2 ping statistics --- 00:24:01.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.712 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:24:01.712 00:24:01.712 --- 10.0.0.1 ping statistics --- 00:24:01.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.712 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.712 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3521081 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3521081 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3521081 ']' 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.713 11:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.713 [2024-12-06 11:23:07.801928] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:24:01.713 [2024-12-06 11:23:07.801982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.974 [2024-12-06 11:23:07.887686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.974 [2024-12-06 11:23:07.922409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.974 [2024-12-06 11:23:07.922441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.974 [2024-12-06 11:23:07.922448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.974 [2024-12-06 11:23:07.922455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.974 [2024-12-06 11:23:07.922460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.974 [2024-12-06 11:23:07.923063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.545 [2024-12-06 11:23:08.648255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.545 null0 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 52bbbb53a5fa413c8ae822952cee0237 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.545 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.545 [2024-12-06 11:23:08.708538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 nvme0n1 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.806 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.806 [ 00:24:02.806 { 00:24:02.806 "name": "nvme0n1", 00:24:02.806 "aliases": [ 00:24:02.806 "52bbbb53-a5fa-413c-8ae8-22952cee0237" 00:24:02.806 ], 00:24:02.806 "product_name": "NVMe disk", 00:24:02.806 "block_size": 512, 00:24:02.806 "num_blocks": 2097152, 00:24:02.806 "uuid": "52bbbb53-a5fa-413c-8ae8-22952cee0237", 00:24:02.806 "numa_id": 0, 00:24:02.806 "assigned_rate_limits": { 00:24:02.806 "rw_ios_per_sec": 0, 00:24:02.806 "rw_mbytes_per_sec": 0, 00:24:02.806 "r_mbytes_per_sec": 0, 00:24:02.806 "w_mbytes_per_sec": 0 00:24:02.806 }, 00:24:02.806 "claimed": false, 00:24:02.806 "zoned": false, 00:24:02.806 "supported_io_types": { 00:24:02.806 "read": true, 00:24:02.806 "write": true, 00:24:02.806 "unmap": false, 00:24:02.806 "flush": true, 00:24:02.806 "reset": true, 00:24:02.806 "nvme_admin": true, 00:24:02.806 "nvme_io": true, 00:24:02.806 "nvme_io_md": false, 00:24:02.806 "write_zeroes": true, 00:24:02.806 "zcopy": false, 00:24:02.806 "get_zone_info": false, 00:24:02.806 "zone_management": false, 00:24:02.806 "zone_append": false, 00:24:02.806 "compare": true, 00:24:02.806 "compare_and_write": true, 00:24:02.806 "abort": true, 00:24:02.806 "seek_hole": false, 00:24:02.806 "seek_data": false, 00:24:02.806 "copy": true, 00:24:02.806 
"nvme_iov_md": false 00:24:02.806 }, 00:24:02.806 "memory_domains": [ 00:24:02.806 { 00:24:02.806 "dma_device_id": "system", 00:24:02.806 "dma_device_type": 1 00:24:02.806 } 00:24:02.806 ], 00:24:02.806 "driver_specific": { 00:24:02.806 "nvme": [ 00:24:02.806 { 00:24:02.806 "trid": { 00:24:02.806 "trtype": "TCP", 00:24:02.806 "adrfam": "IPv4", 00:24:02.806 "traddr": "10.0.0.2", 00:24:02.806 "trsvcid": "4420", 00:24:02.806 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:02.806 }, 00:24:02.806 "ctrlr_data": { 00:24:02.806 "cntlid": 1, 00:24:02.806 "vendor_id": "0x8086", 00:24:03.068 "model_number": "SPDK bdev Controller", 00:24:03.068 "serial_number": "00000000000000000000", 00:24:03.068 "firmware_revision": "25.01", 00:24:03.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.068 "oacs": { 00:24:03.068 "security": 0, 00:24:03.068 "format": 0, 00:24:03.068 "firmware": 0, 00:24:03.068 "ns_manage": 0 00:24:03.068 }, 00:24:03.068 "multi_ctrlr": true, 00:24:03.068 "ana_reporting": false 00:24:03.068 }, 00:24:03.068 "vs": { 00:24:03.068 "nvme_version": "1.3" 00:24:03.068 }, 00:24:03.068 "ns_data": { 00:24:03.068 "id": 1, 00:24:03.068 "can_share": true 00:24:03.068 } 00:24:03.068 } 00:24:03.068 ], 00:24:03.068 "mp_policy": "active_passive" 00:24:03.068 } 00:24:03.068 } 00:24:03.068 ] 00:24:03.068 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.068 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:03.068 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 [2024-12-06 11:23:08.982749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.068 [2024-12-06 11:23:08.982814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1223d40 (9): Bad file descriptor 00:24:03.068 [2024-12-06 11:23:09.114958] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 [ 00:24:03.068 { 00:24:03.068 "name": "nvme0n1", 00:24:03.068 "aliases": [ 00:24:03.068 "52bbbb53-a5fa-413c-8ae8-22952cee0237" 00:24:03.068 ], 00:24:03.068 "product_name": "NVMe disk", 00:24:03.068 "block_size": 512, 00:24:03.068 "num_blocks": 2097152, 00:24:03.068 "uuid": "52bbbb53-a5fa-413c-8ae8-22952cee0237", 00:24:03.068 "numa_id": 0, 00:24:03.068 "assigned_rate_limits": { 00:24:03.068 "rw_ios_per_sec": 0, 00:24:03.068 "rw_mbytes_per_sec": 0, 00:24:03.068 "r_mbytes_per_sec": 0, 00:24:03.068 "w_mbytes_per_sec": 0 00:24:03.068 }, 00:24:03.068 "claimed": false, 00:24:03.068 "zoned": false, 00:24:03.068 "supported_io_types": { 00:24:03.068 "read": true, 00:24:03.068 "write": true, 00:24:03.068 "unmap": false, 00:24:03.068 "flush": true, 00:24:03.068 "reset": true, 00:24:03.068 "nvme_admin": true, 00:24:03.068 "nvme_io": true, 00:24:03.068 "nvme_io_md": false, 00:24:03.068 "write_zeroes": true, 00:24:03.068 "zcopy": false, 00:24:03.068 "get_zone_info": false, 00:24:03.068 "zone_management": false, 00:24:03.068 "zone_append": false, 00:24:03.068 "compare": true, 00:24:03.068 "compare_and_write": true, 00:24:03.068 "abort": true, 00:24:03.068 "seek_hole": false, 00:24:03.068 "seek_data": false, 00:24:03.068 "copy": true, 00:24:03.068 "nvme_iov_md": false 00:24:03.068 }, 00:24:03.068 "memory_domains": [ 
00:24:03.068 { 00:24:03.068 "dma_device_id": "system", 00:24:03.068 "dma_device_type": 1 00:24:03.068 } 00:24:03.068 ], 00:24:03.068 "driver_specific": { 00:24:03.068 "nvme": [ 00:24:03.068 { 00:24:03.068 "trid": { 00:24:03.068 "trtype": "TCP", 00:24:03.068 "adrfam": "IPv4", 00:24:03.068 "traddr": "10.0.0.2", 00:24:03.068 "trsvcid": "4420", 00:24:03.068 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:03.068 }, 00:24:03.068 "ctrlr_data": { 00:24:03.068 "cntlid": 2, 00:24:03.068 "vendor_id": "0x8086", 00:24:03.068 "model_number": "SPDK bdev Controller", 00:24:03.068 "serial_number": "00000000000000000000", 00:24:03.068 "firmware_revision": "25.01", 00:24:03.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.068 "oacs": { 00:24:03.068 "security": 0, 00:24:03.068 "format": 0, 00:24:03.068 "firmware": 0, 00:24:03.068 "ns_manage": 0 00:24:03.068 }, 00:24:03.068 "multi_ctrlr": true, 00:24:03.068 "ana_reporting": false 00:24:03.068 }, 00:24:03.068 "vs": { 00:24:03.068 "nvme_version": "1.3" 00:24:03.068 }, 00:24:03.068 "ns_data": { 00:24:03.068 "id": 1, 00:24:03.068 "can_share": true 00:24:03.068 } 00:24:03.068 } 00:24:03.068 ], 00:24:03.068 "mp_policy": "active_passive" 00:24:03.068 } 00:24:03.068 } 00:24:03.068 ] 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8YWbUfN2pm 
00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8YWbUfN2pm 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.8YWbUfN2pm 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 [2024-12-06 11:23:09.203439] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:03.068 [2024-12-06 11:23:09.203552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.068 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.068 [2024-12-06 11:23:09.227517] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.329 nvme0n1 00:24:03.329 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.329 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:03.329 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.329 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.329 [ 00:24:03.329 { 00:24:03.329 "name": "nvme0n1", 00:24:03.329 "aliases": [ 00:24:03.329 "52bbbb53-a5fa-413c-8ae8-22952cee0237" 00:24:03.329 ], 00:24:03.329 "product_name": "NVMe disk", 00:24:03.329 "block_size": 512, 00:24:03.329 "num_blocks": 2097152, 00:24:03.330 "uuid": "52bbbb53-a5fa-413c-8ae8-22952cee0237", 00:24:03.330 "numa_id": 0, 00:24:03.330 "assigned_rate_limits": { 00:24:03.330 "rw_ios_per_sec": 0, 00:24:03.330 
"rw_mbytes_per_sec": 0, 00:24:03.330 "r_mbytes_per_sec": 0, 00:24:03.330 "w_mbytes_per_sec": 0 00:24:03.330 }, 00:24:03.330 "claimed": false, 00:24:03.330 "zoned": false, 00:24:03.330 "supported_io_types": { 00:24:03.330 "read": true, 00:24:03.330 "write": true, 00:24:03.330 "unmap": false, 00:24:03.330 "flush": true, 00:24:03.330 "reset": true, 00:24:03.330 "nvme_admin": true, 00:24:03.330 "nvme_io": true, 00:24:03.330 "nvme_io_md": false, 00:24:03.330 "write_zeroes": true, 00:24:03.330 "zcopy": false, 00:24:03.330 "get_zone_info": false, 00:24:03.330 "zone_management": false, 00:24:03.330 "zone_append": false, 00:24:03.330 "compare": true, 00:24:03.330 "compare_and_write": true, 00:24:03.330 "abort": true, 00:24:03.330 "seek_hole": false, 00:24:03.330 "seek_data": false, 00:24:03.330 "copy": true, 00:24:03.330 "nvme_iov_md": false 00:24:03.330 }, 00:24:03.330 "memory_domains": [ 00:24:03.330 { 00:24:03.330 "dma_device_id": "system", 00:24:03.330 "dma_device_type": 1 00:24:03.330 } 00:24:03.330 ], 00:24:03.330 "driver_specific": { 00:24:03.330 "nvme": [ 00:24:03.330 { 00:24:03.330 "trid": { 00:24:03.330 "trtype": "TCP", 00:24:03.330 "adrfam": "IPv4", 00:24:03.330 "traddr": "10.0.0.2", 00:24:03.330 "trsvcid": "4421", 00:24:03.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:03.330 }, 00:24:03.330 "ctrlr_data": { 00:24:03.330 "cntlid": 3, 00:24:03.330 "vendor_id": "0x8086", 00:24:03.330 "model_number": "SPDK bdev Controller", 00:24:03.330 "serial_number": "00000000000000000000", 00:24:03.330 "firmware_revision": "25.01", 00:24:03.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.330 "oacs": { 00:24:03.330 "security": 0, 00:24:03.330 "format": 0, 00:24:03.330 "firmware": 0, 00:24:03.330 "ns_manage": 0 00:24:03.330 }, 00:24:03.330 "multi_ctrlr": true, 00:24:03.330 "ana_reporting": false 00:24:03.330 }, 00:24:03.330 "vs": { 00:24:03.330 "nvme_version": "1.3" 00:24:03.330 }, 00:24:03.330 "ns_data": { 00:24:03.330 "id": 1, 00:24:03.330 "can_share": true 00:24:03.330 } 
00:24:03.330 } 00:24:03.330 ], 00:24:03.330 "mp_policy": "active_passive" 00:24:03.330 } 00:24:03.330 } 00:24:03.330 ] 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.8YWbUfN2pm 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.330 rmmod nvme_tcp 00:24:03.330 rmmod nvme_fabrics 00:24:03.330 rmmod nvme_keyring 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:03.330 11:23:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3521081 ']' 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3521081 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3521081 ']' 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3521081 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3521081 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3521081' 00:24:03.330 killing process with pid 3521081 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3521081 00:24:03.330 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3521081 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.590 
11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.590 11:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.136 00:24:06.136 real 0m12.667s 00:24:06.136 user 0m4.574s 00:24:06.136 sys 0m6.650s 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.136 ************************************ 00:24:06.136 END TEST nvmf_async_init 00:24:06.136 ************************************ 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.136 ************************************ 00:24:06.136 START TEST dma 00:24:06.136 ************************************ 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:06.136 * Looking for test storage... 00:24:06.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.136 --rc genhtml_branch_coverage=1 00:24:06.136 --rc genhtml_function_coverage=1 00:24:06.136 --rc genhtml_legend=1 00:24:06.136 --rc geninfo_all_blocks=1 00:24:06.136 --rc geninfo_unexecuted_blocks=1 00:24:06.136 00:24:06.136 ' 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.136 --rc genhtml_branch_coverage=1 00:24:06.136 --rc genhtml_function_coverage=1 
00:24:06.136 --rc genhtml_legend=1 00:24:06.136 --rc geninfo_all_blocks=1 00:24:06.136 --rc geninfo_unexecuted_blocks=1 00:24:06.136 00:24:06.136 ' 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.136 --rc genhtml_branch_coverage=1 00:24:06.136 --rc genhtml_function_coverage=1 00:24:06.136 --rc genhtml_legend=1 00:24:06.136 --rc geninfo_all_blocks=1 00:24:06.136 --rc geninfo_unexecuted_blocks=1 00:24:06.136 00:24:06.136 ' 00:24:06.136 11:23:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:06.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.136 --rc genhtml_branch_coverage=1 00:24:06.136 --rc genhtml_function_coverage=1 00:24:06.137 --rc genhtml_legend=1 00:24:06.137 --rc geninfo_all_blocks=1 00:24:06.137 --rc geninfo_unexecuted_blocks=1 00:24:06.137 00:24:06.137 ' 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:06.137 
11:23:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.137 11:23:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:06.137 00:24:06.137 real 0m0.239s 00:24:06.137 user 0m0.140s 00:24:06.137 sys 0m0.114s 00:24:06.137 11:23:12 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:06.137 ************************************ 00:24:06.137 END TEST dma 00:24:06.137 ************************************ 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.137 ************************************ 00:24:06.137 START TEST nvmf_identify 00:24:06.137 ************************************ 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:06.137 * Looking for test storage... 
00:24:06.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:06.137 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.138 --rc genhtml_branch_coverage=1 00:24:06.138 --rc genhtml_function_coverage=1 00:24:06.138 --rc genhtml_legend=1 00:24:06.138 --rc geninfo_all_blocks=1 00:24:06.138 --rc geninfo_unexecuted_blocks=1 00:24:06.138 00:24:06.138 ' 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:24:06.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.138 --rc genhtml_branch_coverage=1 00:24:06.138 --rc genhtml_function_coverage=1 00:24:06.138 --rc genhtml_legend=1 00:24:06.138 --rc geninfo_all_blocks=1 00:24:06.138 --rc geninfo_unexecuted_blocks=1 00:24:06.138 00:24:06.138 ' 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:06.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.138 --rc genhtml_branch_coverage=1 00:24:06.138 --rc genhtml_function_coverage=1 00:24:06.138 --rc genhtml_legend=1 00:24:06.138 --rc geninfo_all_blocks=1 00:24:06.138 --rc geninfo_unexecuted_blocks=1 00:24:06.138 00:24:06.138 ' 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:06.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.138 --rc genhtml_branch_coverage=1 00:24:06.138 --rc genhtml_function_coverage=1 00:24:06.138 --rc genhtml_legend=1 00:24:06.138 --rc geninfo_all_blocks=1 00:24:06.138 --rc geninfo_unexecuted_blocks=1 00:24:06.138 00:24:06.138 ' 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.138 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.399 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.399 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.399 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.399 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.399 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.400 11:23:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.548 11:23:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:14.548 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.548 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.549 
11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:14.549 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:14.549 Found net devices under 0000:31:00.0: cvl_0_0 00:24:14.549 11:23:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:14.549 Found net devices under 0000:31:00.1: cvl_0_1 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:24:14.549 00:24:14.549 --- 10.0.0.2 ping statistics --- 00:24:14.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.549 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:24:14.549 00:24:14.549 --- 10.0.0.1 ping statistics --- 00:24:14.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.549 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3526174 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3526174 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3526174 ']' 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.549 11:23:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:14.811 [2024-12-06 11:23:20.738661] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:24:14.811 [2024-12-06 11:23:20.738733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.811 [2024-12-06 11:23:20.832880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.811 [2024-12-06 11:23:20.878954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.811 [2024-12-06 11:23:20.878995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.811 [2024-12-06 11:23:20.879004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.811 [2024-12-06 11:23:20.879011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.811 [2024-12-06 11:23:20.879017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.811 [2024-12-06 11:23:20.880733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.811 [2024-12-06 11:23:20.880887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.811 [2024-12-06 11:23:20.880930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.811 [2024-12-06 11:23:20.880930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.387 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.387 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:15.387 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:15.387 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.387 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.647 [2024-12-06 11:23:21.558551] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.647 Malloc0 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.647 11:23:21 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.647 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.648 [2024-12-06 11:23:21.662202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.648 11:23:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.648 [ 00:24:15.648 { 00:24:15.648 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:15.648 "subtype": "Discovery", 00:24:15.648 "listen_addresses": [ 00:24:15.648 { 00:24:15.648 "trtype": "TCP", 00:24:15.648 "adrfam": "IPv4", 00:24:15.648 "traddr": "10.0.0.2", 00:24:15.648 "trsvcid": "4420" 00:24:15.648 } 00:24:15.648 ], 00:24:15.648 "allow_any_host": true, 00:24:15.648 "hosts": [] 00:24:15.648 }, 00:24:15.648 { 00:24:15.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.648 "subtype": "NVMe", 00:24:15.648 "listen_addresses": [ 00:24:15.648 { 00:24:15.648 "trtype": "TCP", 00:24:15.648 "adrfam": "IPv4", 00:24:15.648 "traddr": "10.0.0.2", 00:24:15.648 "trsvcid": "4420" 00:24:15.648 } 00:24:15.648 ], 00:24:15.648 "allow_any_host": true, 00:24:15.648 "hosts": [], 00:24:15.648 "serial_number": "SPDK00000000000001", 00:24:15.648 "model_number": "SPDK bdev Controller", 00:24:15.648 "max_namespaces": 32, 00:24:15.648 "min_cntlid": 1, 00:24:15.648 "max_cntlid": 65519, 00:24:15.648 "namespaces": [ 00:24:15.648 { 00:24:15.648 "nsid": 1, 00:24:15.648 "bdev_name": "Malloc0", 00:24:15.648 "name": "Malloc0", 00:24:15.648 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:15.648 "eui64": "ABCDEF0123456789", 00:24:15.648 "uuid": "22eca44b-07b3-48ac-871a-56b6beae2a78" 00:24:15.648 } 00:24:15.648 ] 00:24:15.648 } 00:24:15.648 ] 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.648 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:15.648 [2024-12-06 11:23:21.726801] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:24:15.648 [2024-12-06 11:23:21.726890] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526521 ] 00:24:15.648 [2024-12-06 11:23:21.781050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:15.648 [2024-12-06 11:23:21.781109] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:15.648 [2024-12-06 11:23:21.781115] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:15.648 [2024-12-06 11:23:21.781131] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:15.648 [2024-12-06 11:23:21.781141] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:15.648 [2024-12-06 11:23:21.785142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:15.648 [2024-12-06 11:23:21.785180] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1976550 0 00:24:15.648 [2024-12-06 11:23:21.792875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:15.648 [2024-12-06 11:23:21.792888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:15.648 [2024-12-06 11:23:21.792892] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:15.648 [2024-12-06 11:23:21.792896] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:15.648 [2024-12-06 11:23:21.792928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.792934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.792939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.648 [2024-12-06 11:23:21.792953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:15.648 [2024-12-06 11:23:21.792970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.648 [2024-12-06 11:23:21.800872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.648 [2024-12-06 11:23:21.800881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.648 [2024-12-06 11:23:21.800886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.800890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.648 [2024-12-06 11:23:21.800903] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:15.648 [2024-12-06 11:23:21.800910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:15.648 [2024-12-06 11:23:21.800916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:15.648 [2024-12-06 11:23:21.800929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.800936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.800940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 
00:24:15.648 [2024-12-06 11:23:21.800948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.648 [2024-12-06 11:23:21.800962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.648 [2024-12-06 11:23:21.801155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.648 [2024-12-06 11:23:21.801162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.648 [2024-12-06 11:23:21.801166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.648 [2024-12-06 11:23:21.801176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:15.648 [2024-12-06 11:23:21.801184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:15.648 [2024-12-06 11:23:21.801191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.648 [2024-12-06 11:23:21.801205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.648 [2024-12-06 11:23:21.801216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.648 [2024-12-06 11:23:21.801410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.648 [2024-12-06 11:23:21.801417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:15.648 [2024-12-06 11:23:21.801420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.648 [2024-12-06 11:23:21.801430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:15.648 [2024-12-06 11:23:21.801438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:15.648 [2024-12-06 11:23:21.801444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.648 [2024-12-06 11:23:21.801459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.648 [2024-12-06 11:23:21.801469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.648 [2024-12-06 11:23:21.801658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.648 [2024-12-06 11:23:21.801665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.648 [2024-12-06 11:23:21.801668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.648 [2024-12-06 11:23:21.801677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:15.648 [2024-12-06 11:23:21.801686] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.648 [2024-12-06 11:23:21.801705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.648 [2024-12-06 11:23:21.801715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.648 [2024-12-06 11:23:21.801882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.648 [2024-12-06 11:23:21.801889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.648 [2024-12-06 11:23:21.801892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.648 [2024-12-06 11:23:21.801896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.648 [2024-12-06 11:23:21.801901] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:15.648 [2024-12-06 11:23:21.801906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:15.649 [2024-12-06 11:23:21.801913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:15.649 [2024-12-06 11:23:21.802022] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:15.649 [2024-12-06 11:23:21.802027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:15.649 [2024-12-06 11:23:21.802035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.649 [2024-12-06 11:23:21.802049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.649 [2024-12-06 11:23:21.802060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.649 [2024-12-06 11:23:21.802244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.649 [2024-12-06 11:23:21.802250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.649 [2024-12-06 11:23:21.802254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.649 [2024-12-06 11:23:21.802263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:15.649 [2024-12-06 11:23:21.802272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.649 [2024-12-06 11:23:21.802286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.649 [2024-12-06 11:23:21.802296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.649 [2024-12-06 
11:23:21.802467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.649 [2024-12-06 11:23:21.802473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.649 [2024-12-06 11:23:21.802477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.649 [2024-12-06 11:23:21.802486] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:15.649 [2024-12-06 11:23:21.802490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:15.649 [2024-12-06 11:23:21.802500] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:15.649 [2024-12-06 11:23:21.802508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:15.649 [2024-12-06 11:23:21.802517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.649 [2024-12-06 11:23:21.802528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.649 [2024-12-06 11:23:21.802539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.649 [2024-12-06 11:23:21.802721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.649 [2024-12-06 11:23:21.802728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:15.649 [2024-12-06 11:23:21.802732] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802736] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1976550): datao=0, datal=4096, cccid=0 00:24:15.649 [2024-12-06 11:23:21.802741] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d8100) on tqpair(0x1976550): expected_datao=0, payload_size=4096 00:24:15.649 [2024-12-06 11:23:21.802746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.649 [2024-12-06 11:23:21.802786] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.845868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.911 [2024-12-06 11:23:21.845879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.911 [2024-12-06 11:23:21.845882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.845887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.911 [2024-12-06 11:23:21.845895] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:15.911 [2024-12-06 11:23:21.845904] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:15.911 [2024-12-06 11:23:21.845908] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:15.911 [2024-12-06 11:23:21.845914] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:15.911 [2024-12-06 11:23:21.845918] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:15.911 [2024-12-06 11:23:21.845923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:15.911 [2024-12-06 11:23:21.845932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:15.911 [2024-12-06 11:23:21.845939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.845943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.845947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.845955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:15.911 [2024-12-06 11:23:21.845968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.911 [2024-12-06 11:23:21.846138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.911 [2024-12-06 11:23:21.846145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.911 [2024-12-06 11:23:21.846151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.911 [2024-12-06 11:23:21.846163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.846177] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.911 [2024-12-06 11:23:21.846183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.846197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.911 [2024-12-06 11:23:21.846203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.846216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.911 [2024-12-06 11:23:21.846222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.846235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.911 [2024-12-06 11:23:21.846240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:15.911 [2024-12-06 11:23:21.846250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:15.911 [2024-12-06 11:23:21.846257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.846268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.911 [2024-12-06 11:23:21.846280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8100, cid 0, qid 0 00:24:15.911 [2024-12-06 11:23:21.846285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8280, cid 1, qid 0 00:24:15.911 [2024-12-06 11:23:21.846290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8400, cid 2, qid 0 00:24:15.911 [2024-12-06 11:23:21.846295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.911 [2024-12-06 11:23:21.846299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8700, cid 4, qid 0 00:24:15.911 [2024-12-06 11:23:21.846520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.911 [2024-12-06 11:23:21.846527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.911 [2024-12-06 11:23:21.846530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8700) on tqpair=0x1976550 00:24:15.911 [2024-12-06 11:23:21.846539] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:15.911 [2024-12-06 11:23:21.846544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:15.911 [2024-12-06 11:23:21.846557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.846568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.911 [2024-12-06 11:23:21.846578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8700, cid 4, qid 0 00:24:15.911 [2024-12-06 11:23:21.846758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.911 [2024-12-06 11:23:21.846765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.911 [2024-12-06 11:23:21.846768] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846772] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1976550): datao=0, datal=4096, cccid=4 00:24:15.911 [2024-12-06 11:23:21.846777] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d8700) on tqpair(0x1976550): expected_datao=0, payload_size=4096 00:24:15.911 [2024-12-06 11:23:21.846781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846798] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846802] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.911 [2024-12-06 11:23:21.846957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.911 [2024-12-06 11:23:21.846961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.846965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x19d8700) on tqpair=0x1976550 00:24:15.911 [2024-12-06 11:23:21.846977] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:15.911 [2024-12-06 11:23:21.846999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.847010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.911 [2024-12-06 11:23:21.847017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.847030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.911 [2024-12-06 11:23:21.847044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8700, cid 4, qid 0 00:24:15.911 [2024-12-06 11:23:21.847049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8880, cid 5, qid 0 00:24:15.911 [2024-12-06 11:23:21.847244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.911 [2024-12-06 11:23:21.847250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.911 [2024-12-06 11:23:21.847254] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847258] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1976550): datao=0, datal=1024, cccid=4 00:24:15.911 [2024-12-06 11:23:21.847262] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d8700) on tqpair(0x1976550): expected_datao=0, payload_size=1024 00:24:15.911 [2024-12-06 11:23:21.847266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847273] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847277] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.911 [2024-12-06 11:23:21.847290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.911 [2024-12-06 11:23:21.847294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.847298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8880) on tqpair=0x1976550 00:24:15.911 [2024-12-06 11:23:21.888022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.911 [2024-12-06 11:23:21.888031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.911 [2024-12-06 11:23:21.888035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.888039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8700) on tqpair=0x1976550 00:24:15.911 [2024-12-06 11:23:21.888050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.888054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1976550) 00:24:15.911 [2024-12-06 11:23:21.888060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.911 [2024-12-06 11:23:21.888075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8700, cid 4, qid 0 00:24:15.911 [2024-12-06 11:23:21.888359] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.911 [2024-12-06 11:23:21.888366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.911 [2024-12-06 11:23:21.888369] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.888373] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1976550): datao=0, datal=3072, cccid=4 00:24:15.911 [2024-12-06 11:23:21.888378] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d8700) on tqpair(0x1976550): expected_datao=0, payload_size=3072 00:24:15.911 [2024-12-06 11:23:21.888382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.911 [2024-12-06 11:23:21.888397] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.888401] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.933873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.912 [2024-12-06 11:23:21.933882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.912 [2024-12-06 11:23:21.933886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.933890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8700) on tqpair=0x1976550 00:24:15.912 [2024-12-06 11:23:21.933899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.933903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1976550) 00:24:15.912 [2024-12-06 11:23:21.933909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.912 [2024-12-06 11:23:21.933924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8700, cid 4, qid 0 00:24:15.912 [2024-12-06 
11:23:21.934100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.912 [2024-12-06 11:23:21.934106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.912 [2024-12-06 11:23:21.934110] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.934113] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1976550): datao=0, datal=8, cccid=4 00:24:15.912 [2024-12-06 11:23:21.934118] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19d8700) on tqpair(0x1976550): expected_datao=0, payload_size=8 00:24:15.912 [2024-12-06 11:23:21.934122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.934129] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.934132] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.976019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.912 [2024-12-06 11:23:21.976028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.912 [2024-12-06 11:23:21.976035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.912 [2024-12-06 11:23:21.976039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8700) on tqpair=0x1976550 00:24:15.912 ===================================================== 00:24:15.912 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:15.912 ===================================================== 00:24:15.912 Controller Capabilities/Features 00:24:15.912 ================================ 00:24:15.912 Vendor ID: 0000 00:24:15.912 Subsystem Vendor ID: 0000 00:24:15.912 Serial Number: .................... 00:24:15.912 Model Number: ........................................ 
00:24:15.912 Firmware Version: 25.01 00:24:15.912 Recommended Arb Burst: 0 00:24:15.912 IEEE OUI Identifier: 00 00 00 00:24:15.912 Multi-path I/O 00:24:15.912 May have multiple subsystem ports: No 00:24:15.912 May have multiple controllers: No 00:24:15.912 Associated with SR-IOV VF: No 00:24:15.912 Max Data Transfer Size: 131072 00:24:15.912 Max Number of Namespaces: 0 00:24:15.912 Max Number of I/O Queues: 1024 00:24:15.912 NVMe Specification Version (VS): 1.3 00:24:15.912 NVMe Specification Version (Identify): 1.3 00:24:15.912 Maximum Queue Entries: 128 00:24:15.912 Contiguous Queues Required: Yes 00:24:15.912 Arbitration Mechanisms Supported 00:24:15.912 Weighted Round Robin: Not Supported 00:24:15.912 Vendor Specific: Not Supported 00:24:15.912 Reset Timeout: 15000 ms 00:24:15.912 Doorbell Stride: 4 bytes 00:24:15.912 NVM Subsystem Reset: Not Supported 00:24:15.912 Command Sets Supported 00:24:15.912 NVM Command Set: Supported 00:24:15.912 Boot Partition: Not Supported 00:24:15.912 Memory Page Size Minimum: 4096 bytes 00:24:15.912 Memory Page Size Maximum: 4096 bytes 00:24:15.912 Persistent Memory Region: Not Supported 00:24:15.912 Optional Asynchronous Events Supported 00:24:15.912 Namespace Attribute Notices: Not Supported 00:24:15.912 Firmware Activation Notices: Not Supported 00:24:15.912 ANA Change Notices: Not Supported 00:24:15.912 PLE Aggregate Log Change Notices: Not Supported 00:24:15.912 LBA Status Info Alert Notices: Not Supported 00:24:15.912 EGE Aggregate Log Change Notices: Not Supported 00:24:15.912 Normal NVM Subsystem Shutdown event: Not Supported 00:24:15.912 Zone Descriptor Change Notices: Not Supported 00:24:15.912 Discovery Log Change Notices: Supported 00:24:15.912 Controller Attributes 00:24:15.912 128-bit Host Identifier: Not Supported 00:24:15.912 Non-Operational Permissive Mode: Not Supported 00:24:15.912 NVM Sets: Not Supported 00:24:15.912 Read Recovery Levels: Not Supported 00:24:15.912 Endurance Groups: Not Supported 00:24:15.912 
Predictable Latency Mode: Not Supported 00:24:15.912 Traffic Based Keep ALive: Not Supported 00:24:15.912 Namespace Granularity: Not Supported 00:24:15.912 SQ Associations: Not Supported 00:24:15.912 UUID List: Not Supported 00:24:15.912 Multi-Domain Subsystem: Not Supported 00:24:15.912 Fixed Capacity Management: Not Supported 00:24:15.912 Variable Capacity Management: Not Supported 00:24:15.912 Delete Endurance Group: Not Supported 00:24:15.912 Delete NVM Set: Not Supported 00:24:15.912 Extended LBA Formats Supported: Not Supported 00:24:15.912 Flexible Data Placement Supported: Not Supported 00:24:15.912 00:24:15.912 Controller Memory Buffer Support 00:24:15.912 ================================ 00:24:15.912 Supported: No 00:24:15.912 00:24:15.912 Persistent Memory Region Support 00:24:15.912 ================================ 00:24:15.912 Supported: No 00:24:15.912 00:24:15.912 Admin Command Set Attributes 00:24:15.912 ============================ 00:24:15.912 Security Send/Receive: Not Supported 00:24:15.912 Format NVM: Not Supported 00:24:15.912 Firmware Activate/Download: Not Supported 00:24:15.912 Namespace Management: Not Supported 00:24:15.912 Device Self-Test: Not Supported 00:24:15.912 Directives: Not Supported 00:24:15.912 NVMe-MI: Not Supported 00:24:15.912 Virtualization Management: Not Supported 00:24:15.912 Doorbell Buffer Config: Not Supported 00:24:15.912 Get LBA Status Capability: Not Supported 00:24:15.912 Command & Feature Lockdown Capability: Not Supported 00:24:15.912 Abort Command Limit: 1 00:24:15.912 Async Event Request Limit: 4 00:24:15.912 Number of Firmware Slots: N/A 00:24:15.912 Firmware Slot 1 Read-Only: N/A 00:24:15.912 Firmware Activation Without Reset: N/A 00:24:15.912 Multiple Update Detection Support: N/A 00:24:15.912 Firmware Update Granularity: No Information Provided 00:24:15.912 Per-Namespace SMART Log: No 00:24:15.912 Asymmetric Namespace Access Log Page: Not Supported 00:24:15.912 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:15.912 Command Effects Log Page: Not Supported 00:24:15.912 Get Log Page Extended Data: Supported 00:24:15.912 Telemetry Log Pages: Not Supported 00:24:15.912 Persistent Event Log Pages: Not Supported 00:24:15.912 Supported Log Pages Log Page: May Support 00:24:15.912 Commands Supported & Effects Log Page: Not Supported 00:24:15.912 Feature Identifiers & Effects Log Page:May Support 00:24:15.912 NVMe-MI Commands & Effects Log Page: May Support 00:24:15.912 Data Area 4 for Telemetry Log: Not Supported 00:24:15.912 Error Log Page Entries Supported: 128 00:24:15.912 Keep Alive: Not Supported 00:24:15.912 00:24:15.912 NVM Command Set Attributes 00:24:15.912 ========================== 00:24:15.912 Submission Queue Entry Size 00:24:15.912 Max: 1 00:24:15.912 Min: 1 00:24:15.912 Completion Queue Entry Size 00:24:15.912 Max: 1 00:24:15.912 Min: 1 00:24:15.912 Number of Namespaces: 0 00:24:15.912 Compare Command: Not Supported 00:24:15.912 Write Uncorrectable Command: Not Supported 00:24:15.912 Dataset Management Command: Not Supported 00:24:15.912 Write Zeroes Command: Not Supported 00:24:15.912 Set Features Save Field: Not Supported 00:24:15.912 Reservations: Not Supported 00:24:15.912 Timestamp: Not Supported 00:24:15.912 Copy: Not Supported 00:24:15.912 Volatile Write Cache: Not Present 00:24:15.912 Atomic Write Unit (Normal): 1 00:24:15.912 Atomic Write Unit (PFail): 1 00:24:15.912 Atomic Compare & Write Unit: 1 00:24:15.912 Fused Compare & Write: Supported 00:24:15.912 Scatter-Gather List 00:24:15.912 SGL Command Set: Supported 00:24:15.912 SGL Keyed: Supported 00:24:15.912 SGL Bit Bucket Descriptor: Not Supported 00:24:15.912 SGL Metadata Pointer: Not Supported 00:24:15.912 Oversized SGL: Not Supported 00:24:15.912 SGL Metadata Address: Not Supported 00:24:15.912 SGL Offset: Supported 00:24:15.912 Transport SGL Data Block: Not Supported 00:24:15.912 Replay Protected Memory Block: Not Supported 00:24:15.912 00:24:15.912 
Firmware Slot Information 00:24:15.912 ========================= 00:24:15.912 Active slot: 0 00:24:15.912 00:24:15.912 00:24:15.912 Error Log 00:24:15.912 ========= 00:24:15.912 00:24:15.912 Active Namespaces 00:24:15.912 ================= 00:24:15.912 Discovery Log Page 00:24:15.912 ================== 00:24:15.912 Generation Counter: 2 00:24:15.912 Number of Records: 2 00:24:15.912 Record Format: 0 00:24:15.912 00:24:15.912 Discovery Log Entry 0 00:24:15.912 ---------------------- 00:24:15.912 Transport Type: 3 (TCP) 00:24:15.912 Address Family: 1 (IPv4) 00:24:15.912 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:15.912 Entry Flags: 00:24:15.912 Duplicate Returned Information: 1 00:24:15.912 Explicit Persistent Connection Support for Discovery: 1 00:24:15.912 Transport Requirements: 00:24:15.912 Secure Channel: Not Required 00:24:15.912 Port ID: 0 (0x0000) 00:24:15.912 Controller ID: 65535 (0xffff) 00:24:15.912 Admin Max SQ Size: 128 00:24:15.912 Transport Service Identifier: 4420 00:24:15.912 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:15.912 Transport Address: 10.0.0.2 00:24:15.912 Discovery Log Entry 1 00:24:15.912 ---------------------- 00:24:15.912 Transport Type: 3 (TCP) 00:24:15.912 Address Family: 1 (IPv4) 00:24:15.912 Subsystem Type: 2 (NVM Subsystem) 00:24:15.912 Entry Flags: 00:24:15.912 Duplicate Returned Information: 0 00:24:15.912 Explicit Persistent Connection Support for Discovery: 0 00:24:15.912 Transport Requirements: 00:24:15.912 Secure Channel: Not Required 00:24:15.912 Port ID: 0 (0x0000) 00:24:15.912 Controller ID: 65535 (0xffff) 00:24:15.912 Admin Max SQ Size: 128 00:24:15.912 Transport Service Identifier: 4420 00:24:15.912 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:15.912 Transport Address: 10.0.0.2 [2024-12-06 11:23:21.976124] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:15.912 [2024-12-06 
11:23:21.976135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8100) on tqpair=0x1976550 00:24:15.912 [2024-12-06 11:23:21.976142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.913 [2024-12-06 11:23:21.976148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8280) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.976153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.913 [2024-12-06 11:23:21.976158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8400) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.976162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.913 [2024-12-06 11:23:21.976167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.976172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.913 [2024-12-06 11:23:21.976183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.976198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.976212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.976376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 
11:23:21.976382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.913 [2024-12-06 11:23:21.976386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.976397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.976411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.976424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.976633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 11:23:21.976639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.913 [2024-12-06 11:23:21.976643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.976652] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:15.913 [2024-12-06 11:23:21.976656] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:15.913 [2024-12-06 11:23:21.976666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 
[2024-12-06 11:23:21.976673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.976682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.976692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.976871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 11:23:21.976877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.913 [2024-12-06 11:23:21.976881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.976895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.976903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.976909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.976920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.977087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 11:23:21.977094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.913 [2024-12-06 11:23:21.977097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on 
tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.977110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.977125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.977135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.977318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 11:23:21.977324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.913 [2024-12-06 11:23:21.977327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.977341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.977355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.977365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.977528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 11:23:21.977535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:24:15.913 [2024-12-06 11:23:21.977538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.977551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.977565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.977577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.977751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 11:23:21.977757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.913 [2024-12-06 11:23:21.977761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.977774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.977782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1976550) 00:24:15.913 [2024-12-06 11:23:21.977789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.913 [2024-12-06 11:23:21.977798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x19d8580, cid 3, qid 0 00:24:15.913 [2024-12-06 11:23:21.981868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.913 [2024-12-06 11:23:21.981877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.913 [2024-12-06 11:23:21.981880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.913 [2024-12-06 11:23:21.981884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19d8580) on tqpair=0x1976550 00:24:15.913 [2024-12-06 11:23:21.981892] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:24:15.913 00:24:15.913 11:23:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:15.913 [2024-12-06 11:23:22.025977] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:24:15.913 [2024-12-06 11:23:22.026020] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526529 ] 00:24:16.177 [2024-12-06 11:23:22.079946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:16.177 [2024-12-06 11:23:22.080000] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:16.177 [2024-12-06 11:23:22.080005] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:16.177 [2024-12-06 11:23:22.080024] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:16.177 [2024-12-06 11:23:22.080032] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:16.177 [2024-12-06 11:23:22.084071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:16.177 [2024-12-06 11:23:22.084103] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19e3550 0 00:24:16.177 [2024-12-06 11:23:22.091872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:16.177 [2024-12-06 11:23:22.091892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:16.177 [2024-12-06 11:23:22.091896] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:16.177 [2024-12-06 11:23:22.091900] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:16.177 [2024-12-06 11:23:22.091932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.091941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.091945] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.177 [2024-12-06 11:23:22.091957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:16.177 [2024-12-06 11:23:22.091974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.177 [2024-12-06 11:23:22.099872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.177 [2024-12-06 11:23:22.099882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.177 [2024-12-06 11:23:22.099886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.099891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.177 [2024-12-06 11:23:22.099903] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:16.177 [2024-12-06 11:23:22.099912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:16.177 [2024-12-06 11:23:22.099918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:16.177 [2024-12-06 11:23:22.099930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.099934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.099938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.177 [2024-12-06 11:23:22.099945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.177 [2024-12-06 11:23:22.099961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.177 [2024-12-06 11:23:22.100169] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.177 [2024-12-06 11:23:22.100176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.177 [2024-12-06 11:23:22.100180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.100184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.177 [2024-12-06 11:23:22.100189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:16.177 [2024-12-06 11:23:22.100196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:16.177 [2024-12-06 11:23:22.100204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.100211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.177 [2024-12-06 11:23:22.100215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.177 [2024-12-06 11:23:22.100222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.177 [2024-12-06 11:23:22.100233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.177 [2024-12-06 11:23:22.100418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.177 [2024-12-06 11:23:22.100425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.177 [2024-12-06 11:23:22.100428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.100432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.100437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:24:16.178 [2024-12-06 11:23:22.100445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:16.178 [2024-12-06 11:23:22.100452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.100457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.100466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.100473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.178 [2024-12-06 11:23:22.100484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.178 [2024-12-06 11:23:22.100716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.100723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.100726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.100730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.100735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:16.178 [2024-12-06 11:23:22.100745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.100749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.100752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.100760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.178 [2024-12-06 11:23:22.100773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.178 [2024-12-06 11:23:22.101019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.101026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.101030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.101038] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:16.178 [2024-12-06 11:23:22.101043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:16.178 [2024-12-06 11:23:22.101051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:16.178 [2024-12-06 11:23:22.101160] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:16.178 [2024-12-06 11:23:22.101166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:16.178 [2024-12-06 11:23:22.101174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.101188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.178 [2024-12-06 11:23:22.101199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.178 [2024-12-06 11:23:22.101418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.101425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.101428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.101437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:16.178 [2024-12-06 11:23:22.101446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.101465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.178 [2024-12-06 11:23:22.101476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.178 [2024-12-06 11:23:22.101670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.101677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.101681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.101689] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:16.178 [2024-12-06 11:23:22.101694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.101702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:16.178 [2024-12-06 11:23:22.101713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.101725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.101735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.178 [2024-12-06 11:23:22.101746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.178 [2024-12-06 11:23:22.101927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.178 [2024-12-06 11:23:22.101935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.178 [2024-12-06 11:23:22.101938] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101942] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=4096, cccid=0 00:24:16.178 [2024-12-06 11:23:22.101947] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a45100) on tqpair(0x19e3550): expected_datao=0, payload_size=4096 00:24:16.178 [2024-12-06 11:23:22.101951] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101958] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.101963] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.102137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.102140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.102151] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:16.178 [2024-12-06 11:23:22.102159] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:16.178 [2024-12-06 11:23:22.102164] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:16.178 [2024-12-06 11:23:22.102172] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:16.178 [2024-12-06 11:23:22.102177] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:16.178 [2024-12-06 11:23:22.102191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.102200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.102207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102211] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.102226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:16.178 [2024-12-06 11:23:22.102238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45100, cid 0, qid 0 00:24:16.178 [2024-12-06 11:23:22.102475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.102483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.102488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.102499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.102512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.178 [2024-12-06 11:23:22.102519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.102536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:16.178 [2024-12-06 11:23:22.102542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.102555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.178 [2024-12-06 11:23:22.102561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.102575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.178 [2024-12-06 11:23:22.102580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.102593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.102600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.102613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.178 [2024-12-06 11:23:22.102626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1a45100, cid 0, qid 0 00:24:16.178 [2024-12-06 11:23:22.102633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45280, cid 1, qid 0 00:24:16.178 [2024-12-06 11:23:22.102638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45400, cid 2, qid 0 00:24:16.178 [2024-12-06 11:23:22.102643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.178 [2024-12-06 11:23:22.102649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45700, cid 4, qid 0 00:24:16.178 [2024-12-06 11:23:22.102876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.102883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.102887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45700) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.102896] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:16.178 [2024-12-06 11:23:22.102901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.102909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.102918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.102924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.102928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 
11:23:22.102932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.102938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:16.178 [2024-12-06 11:23:22.102949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45700, cid 4, qid 0 00:24:16.178 [2024-12-06 11:23:22.103171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.103177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.178 [2024-12-06 11:23:22.103181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.103185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45700) on tqpair=0x19e3550 00:24:16.178 [2024-12-06 11:23:22.103251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.103261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:16.178 [2024-12-06 11:23:22.103271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.103276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e3550) 00:24:16.178 [2024-12-06 11:23:22.103282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.178 [2024-12-06 11:23:22.103293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45700, cid 4, qid 0 00:24:16.178 [2024-12-06 11:23:22.103482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.178 [2024-12-06 11:23:22.103489] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.178 [2024-12-06 11:23:22.103493] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.103496] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=4096, cccid=4 00:24:16.178 [2024-12-06 11:23:22.103501] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a45700) on tqpair(0x19e3550): expected_datao=0, payload_size=4096 00:24:16.178 [2024-12-06 11:23:22.103505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.103527] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.103533] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.178 [2024-12-06 11:23:22.103674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.178 [2024-12-06 11:23:22.103681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.103684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.103688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45700) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.103697] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:16.179 [2024-12-06 11:23:22.103713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.103725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.103732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.103736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.103742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.103753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45700, cid 4, qid 0 00:24:16.179 [2024-12-06 11:23:22.107870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.179 [2024-12-06 11:23:22.107879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.179 [2024-12-06 11:23:22.107882] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.107886] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=4096, cccid=4 00:24:16.179 [2024-12-06 11:23:22.107890] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a45700) on tqpair(0x19e3550): expected_datao=0, payload_size=4096 00:24:16.179 [2024-12-06 11:23:22.107895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.107901] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.107905] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.107911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.107916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.107920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.107924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45700) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.107937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:16.179 
[2024-12-06 11:23:22.107948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.107959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.107963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.107970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.107982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45700, cid 4, qid 0 00:24:16.179 [2024-12-06 11:23:22.108169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.179 [2024-12-06 11:23:22.108176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.179 [2024-12-06 11:23:22.108180] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108186] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=4096, cccid=4 00:24:16.179 [2024-12-06 11:23:22.108191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a45700) on tqpair(0x19e3550): expected_datao=0, payload_size=4096 00:24:16.179 [2024-12-06 11:23:22.108195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108211] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108218] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.108416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.108419] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45700) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.108430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.108439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.108447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.108458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.108464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.108470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.108475] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:16.179 [2024-12-06 11:23:22.108480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:16.179 [2024-12-06 11:23:22.108485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:16.179 [2024-12-06 11:23:22.108499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108503] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.108510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.108517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.108530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:16.179 [2024-12-06 11:23:22.108544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45700, cid 4, qid 0 00:24:16.179 [2024-12-06 11:23:22.108549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45880, cid 5, qid 0 00:24:16.179 [2024-12-06 11:23:22.108745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.108752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.108755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45700) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.108770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.108778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.108782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45880) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 
11:23:22.108795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.108799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.108805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.108819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45880, cid 5, qid 0 00:24:16.179 [2024-12-06 11:23:22.109020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.109028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.109031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45880) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.109044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.109055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.109066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45880, cid 5, qid 0 00:24:16.179 [2024-12-06 11:23:22.109272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.109278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.109282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1a45880) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.109295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.109305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.109316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45880, cid 5, qid 0 00:24:16.179 [2024-12-06 11:23:22.109574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.109581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.109584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45880) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.109604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.109615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.109627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.109637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:16.179 [2024-12-06 11:23:22.109644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.109656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.109663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19e3550) 00:24:16.179 [2024-12-06 11:23:22.109673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.179 [2024-12-06 11:23:22.109685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45880, cid 5, qid 0 00:24:16.179 [2024-12-06 11:23:22.109690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45700, cid 4, qid 0 00:24:16.179 [2024-12-06 11:23:22.109695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45a00, cid 6, qid 0 00:24:16.179 [2024-12-06 11:23:22.109700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45b80, cid 7, qid 0 00:24:16.179 [2024-12-06 11:23:22.109949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.179 [2024-12-06 11:23:22.109957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.179 [2024-12-06 11:23:22.109961] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.109967] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=8192, cccid=5 00:24:16.179 [2024-12-06 11:23:22.109972] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a45880) on tqpair(0x19e3550): expected_datao=0, payload_size=8192 00:24:16.179 [2024-12-06 11:23:22.109976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110054] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110059] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.179 [2024-12-06 11:23:22.110076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.179 [2024-12-06 11:23:22.110080] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110083] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=512, cccid=4 00:24:16.179 [2024-12-06 11:23:22.110088] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a45700) on tqpair(0x19e3550): expected_datao=0, payload_size=512 00:24:16.179 [2024-12-06 11:23:22.110092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110099] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110102] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.179 [2024-12-06 11:23:22.110113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.179 [2024-12-06 11:23:22.110117] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110120] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=512, cccid=6 00:24:16.179 [2024-12-06 11:23:22.110125] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1a45a00) on tqpair(0x19e3550): expected_datao=0, payload_size=512 00:24:16.179 [2024-12-06 11:23:22.110129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110135] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110139] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:16.179 [2024-12-06 11:23:22.110150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:16.179 [2024-12-06 11:23:22.110154] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110157] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e3550): datao=0, datal=4096, cccid=7 00:24:16.179 [2024-12-06 11:23:22.110164] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a45b80) on tqpair(0x19e3550): expected_datao=0, payload_size=4096 00:24:16.179 [2024-12-06 11:23:22.110168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110181] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.110185] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.151058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.151070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.151073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.151077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45880) on tqpair=0x19e3550 00:24:16.179 [2024-12-06 11:23:22.151092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.179 [2024-12-06 11:23:22.151098] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.179 [2024-12-06 11:23:22.151104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.179 [2024-12-06 11:23:22.151108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45700) on tqpair=0x19e3550 00:24:16.180 [2024-12-06 11:23:22.151118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.180 [2024-12-06 11:23:22.151124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.180 [2024-12-06 11:23:22.151128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.180 [2024-12-06 11:23:22.151132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45a00) on tqpair=0x19e3550 00:24:16.180 [2024-12-06 11:23:22.151139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.180 [2024-12-06 11:23:22.151145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.180 [2024-12-06 11:23:22.151148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.180 [2024-12-06 11:23:22.151152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45b80) on tqpair=0x19e3550 00:24:16.180 ===================================================== 00:24:16.180 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.180 ===================================================== 00:24:16.180 Controller Capabilities/Features 00:24:16.180 ================================ 00:24:16.180 Vendor ID: 8086 00:24:16.180 Subsystem Vendor ID: 8086 00:24:16.180 Serial Number: SPDK00000000000001 00:24:16.180 Model Number: SPDK bdev Controller 00:24:16.180 Firmware Version: 25.01 00:24:16.180 Recommended Arb Burst: 6 00:24:16.180 IEEE OUI Identifier: e4 d2 5c 00:24:16.180 Multi-path I/O 00:24:16.180 May have multiple subsystem ports: Yes 00:24:16.180 May have multiple controllers: Yes 00:24:16.180 Associated with SR-IOV VF: No 
00:24:16.180 Max Data Transfer Size: 131072 00:24:16.180 Max Number of Namespaces: 32 00:24:16.180 Max Number of I/O Queues: 127 00:24:16.180 NVMe Specification Version (VS): 1.3 00:24:16.180 NVMe Specification Version (Identify): 1.3 00:24:16.180 Maximum Queue Entries: 128 00:24:16.180 Contiguous Queues Required: Yes 00:24:16.180 Arbitration Mechanisms Supported 00:24:16.180 Weighted Round Robin: Not Supported 00:24:16.180 Vendor Specific: Not Supported 00:24:16.180 Reset Timeout: 15000 ms 00:24:16.180 Doorbell Stride: 4 bytes 00:24:16.180 NVM Subsystem Reset: Not Supported 00:24:16.180 Command Sets Supported 00:24:16.180 NVM Command Set: Supported 00:24:16.180 Boot Partition: Not Supported 00:24:16.180 Memory Page Size Minimum: 4096 bytes 00:24:16.180 Memory Page Size Maximum: 4096 bytes 00:24:16.180 Persistent Memory Region: Not Supported 00:24:16.180 Optional Asynchronous Events Supported 00:24:16.180 Namespace Attribute Notices: Supported 00:24:16.180 Firmware Activation Notices: Not Supported 00:24:16.180 ANA Change Notices: Not Supported 00:24:16.180 PLE Aggregate Log Change Notices: Not Supported 00:24:16.180 LBA Status Info Alert Notices: Not Supported 00:24:16.180 EGE Aggregate Log Change Notices: Not Supported 00:24:16.180 Normal NVM Subsystem Shutdown event: Not Supported 00:24:16.180 Zone Descriptor Change Notices: Not Supported 00:24:16.180 Discovery Log Change Notices: Not Supported 00:24:16.180 Controller Attributes 00:24:16.180 128-bit Host Identifier: Supported 00:24:16.180 Non-Operational Permissive Mode: Not Supported 00:24:16.180 NVM Sets: Not Supported 00:24:16.180 Read Recovery Levels: Not Supported 00:24:16.180 Endurance Groups: Not Supported 00:24:16.180 Predictable Latency Mode: Not Supported 00:24:16.180 Traffic Based Keep ALive: Not Supported 00:24:16.180 Namespace Granularity: Not Supported 00:24:16.180 SQ Associations: Not Supported 00:24:16.180 UUID List: Not Supported 00:24:16.180 Multi-Domain Subsystem: Not Supported 00:24:16.180 
Fixed Capacity Management: Not Supported 00:24:16.180 Variable Capacity Management: Not Supported 00:24:16.180 Delete Endurance Group: Not Supported 00:24:16.180 Delete NVM Set: Not Supported 00:24:16.180 Extended LBA Formats Supported: Not Supported 00:24:16.180 Flexible Data Placement Supported: Not Supported 00:24:16.180 00:24:16.180 Controller Memory Buffer Support 00:24:16.180 ================================ 00:24:16.180 Supported: No 00:24:16.180 00:24:16.180 Persistent Memory Region Support 00:24:16.180 ================================ 00:24:16.180 Supported: No 00:24:16.180 00:24:16.180 Admin Command Set Attributes 00:24:16.180 ============================ 00:24:16.180 Security Send/Receive: Not Supported 00:24:16.180 Format NVM: Not Supported 00:24:16.180 Firmware Activate/Download: Not Supported 00:24:16.180 Namespace Management: Not Supported 00:24:16.180 Device Self-Test: Not Supported 00:24:16.180 Directives: Not Supported 00:24:16.180 NVMe-MI: Not Supported 00:24:16.180 Virtualization Management: Not Supported 00:24:16.180 Doorbell Buffer Config: Not Supported 00:24:16.180 Get LBA Status Capability: Not Supported 00:24:16.180 Command & Feature Lockdown Capability: Not Supported 00:24:16.180 Abort Command Limit: 4 00:24:16.180 Async Event Request Limit: 4 00:24:16.180 Number of Firmware Slots: N/A 00:24:16.180 Firmware Slot 1 Read-Only: N/A 00:24:16.180 Firmware Activation Without Reset: N/A 00:24:16.180 Multiple Update Detection Support: N/A 00:24:16.180 Firmware Update Granularity: No Information Provided 00:24:16.180 Per-Namespace SMART Log: No 00:24:16.180 Asymmetric Namespace Access Log Page: Not Supported 00:24:16.180 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:16.180 Command Effects Log Page: Supported 00:24:16.180 Get Log Page Extended Data: Supported 00:24:16.180 Telemetry Log Pages: Not Supported 00:24:16.180 Persistent Event Log Pages: Not Supported 00:24:16.180 Supported Log Pages Log Page: May Support 00:24:16.180 Commands Supported & 
Effects Log Page: Not Supported 00:24:16.180 Feature Identifiers & Effects Log Page:May Support 00:24:16.180 NVMe-MI Commands & Effects Log Page: May Support 00:24:16.180 Data Area 4 for Telemetry Log: Not Supported 00:24:16.180 Error Log Page Entries Supported: 128 00:24:16.180 Keep Alive: Supported 00:24:16.180 Keep Alive Granularity: 10000 ms 00:24:16.180 00:24:16.180 NVM Command Set Attributes 00:24:16.180 ========================== 00:24:16.180 Submission Queue Entry Size 00:24:16.180 Max: 64 00:24:16.180 Min: 64 00:24:16.180 Completion Queue Entry Size 00:24:16.180 Max: 16 00:24:16.180 Min: 16 00:24:16.180 Number of Namespaces: 32 00:24:16.180 Compare Command: Supported 00:24:16.180 Write Uncorrectable Command: Not Supported 00:24:16.180 Dataset Management Command: Supported 00:24:16.180 Write Zeroes Command: Supported 00:24:16.180 Set Features Save Field: Not Supported 00:24:16.180 Reservations: Supported 00:24:16.180 Timestamp: Not Supported 00:24:16.180 Copy: Supported 00:24:16.180 Volatile Write Cache: Present 00:24:16.180 Atomic Write Unit (Normal): 1 00:24:16.180 Atomic Write Unit (PFail): 1 00:24:16.180 Atomic Compare & Write Unit: 1 00:24:16.180 Fused Compare & Write: Supported 00:24:16.180 Scatter-Gather List 00:24:16.180 SGL Command Set: Supported 00:24:16.180 SGL Keyed: Supported 00:24:16.180 SGL Bit Bucket Descriptor: Not Supported 00:24:16.180 SGL Metadata Pointer: Not Supported 00:24:16.180 Oversized SGL: Not Supported 00:24:16.180 SGL Metadata Address: Not Supported 00:24:16.180 SGL Offset: Supported 00:24:16.180 Transport SGL Data Block: Not Supported 00:24:16.180 Replay Protected Memory Block: Not Supported 00:24:16.180 00:24:16.180 Firmware Slot Information 00:24:16.180 ========================= 00:24:16.180 Active slot: 1 00:24:16.180 Slot 1 Firmware Revision: 25.01 00:24:16.180 00:24:16.180 00:24:16.180 Commands Supported and Effects 00:24:16.180 ============================== 00:24:16.180 Admin Commands 00:24:16.180 -------------- 
00:24:16.180 Get Log Page (02h): Supported 00:24:16.180 Identify (06h): Supported 00:24:16.180 Abort (08h): Supported 00:24:16.180 Set Features (09h): Supported 00:24:16.180 Get Features (0Ah): Supported 00:24:16.180 Asynchronous Event Request (0Ch): Supported 00:24:16.180 Keep Alive (18h): Supported 00:24:16.180 I/O Commands 00:24:16.180 ------------ 00:24:16.180 Flush (00h): Supported LBA-Change 00:24:16.180 Write (01h): Supported LBA-Change 00:24:16.180 Read (02h): Supported 00:24:16.180 Compare (05h): Supported 00:24:16.180 Write Zeroes (08h): Supported LBA-Change 00:24:16.180 Dataset Management (09h): Supported LBA-Change 00:24:16.180 Copy (19h): Supported LBA-Change 00:24:16.180 00:24:16.180 Error Log 00:24:16.180 ========= 00:24:16.180 00:24:16.180 Arbitration 00:24:16.180 =========== 00:24:16.180 Arbitration Burst: 1 00:24:16.180 00:24:16.180 Power Management 00:24:16.180 ================ 00:24:16.180 Number of Power States: 1 00:24:16.180 Current Power State: Power State #0 00:24:16.180 Power State #0: 00:24:16.180 Max Power: 0.00 W 00:24:16.180 Non-Operational State: Operational 00:24:16.180 Entry Latency: Not Reported 00:24:16.180 Exit Latency: Not Reported 00:24:16.180 Relative Read Throughput: 0 00:24:16.180 Relative Read Latency: 0 00:24:16.180 Relative Write Throughput: 0 00:24:16.180 Relative Write Latency: 0 00:24:16.180 Idle Power: Not Reported 00:24:16.180 Active Power: Not Reported 00:24:16.180 Non-Operational Permissive Mode: Not Supported 00:24:16.180 00:24:16.180 Health Information 00:24:16.180 ================== 00:24:16.180 Critical Warnings: 00:24:16.180 Available Spare Space: OK 00:24:16.180 Temperature: OK 00:24:16.180 Device Reliability: OK 00:24:16.180 Read Only: No 00:24:16.180 Volatile Memory Backup: OK 00:24:16.180 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:16.180 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:16.180 Available Spare: 0% 00:24:16.180 Available Spare Threshold: 0% 00:24:16.180 Life Percentage 
Used:[2024-12-06 11:23:22.151250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.180 [2024-12-06 11:23:22.151256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19e3550) 00:24:16.180 [2024-12-06 11:23:22.151264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.180 [2024-12-06 11:23:22.151277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45b80, cid 7, qid 0 00:24:16.180 [2024-12-06 11:23:22.151502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.180 [2024-12-06 11:23:22.151510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.180 [2024-12-06 11:23:22.151513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.180 [2024-12-06 11:23:22.151517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45b80) on tqpair=0x19e3550 00:24:16.180 [2024-12-06 11:23:22.151555] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:16.180 [2024-12-06 11:23:22.151565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45100) on tqpair=0x19e3550 00:24:16.180 [2024-12-06 11:23:22.151572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.180 [2024-12-06 11:23:22.151577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45280) on tqpair=0x19e3550 00:24:16.180 [2024-12-06 11:23:22.151582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.180 [2024-12-06 11:23:22.151587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45400) on tqpair=0x19e3550 00:24:16.180 [2024-12-06 11:23:22.151594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.180 [2024-12-06 11:23:22.151601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.151607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.181 [2024-12-06 11:23:22.151616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.151620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.151623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.151630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.151642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.151823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.151830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.151834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.151838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.151844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.151848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.151852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.155866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.155885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.156088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.156095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.156099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.156108] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:16.181 [2024-12-06 11:23:22.156113] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:16.181 [2024-12-06 11:23:22.156122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.156141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.156152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.156340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.156347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.156350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156354] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.156364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.156378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.156391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.156592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.156599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.156602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.156616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.156630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.156643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.156787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 
11:23:22.156794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.156798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.156811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.156819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.156825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.156836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.157050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.157057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.157061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.157074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.157089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 
11:23:22.157103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.157298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.157304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.157308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.157321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.157335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.157349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.157566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.157576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.157580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.157593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.157607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.157621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.157825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.157834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.157838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.157851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.157859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.157873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.157887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.158079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.158085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.158089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.158102] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.158116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.158129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.158351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.158358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.158362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.158375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.158390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.158403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.158614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.158621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.158627] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.158640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.158654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.158666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.158868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.158875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.158879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.158892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.158900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.158907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.158920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 
11:23:22.159133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.159140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.159144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.159157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.159171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.159183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.159403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.159410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.159413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.159427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.159441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.159453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.159635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.159642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.159645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.159661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.159668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.181 [2024-12-06 11:23:22.159675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.181 [2024-12-06 11:23:22.159687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.181 [2024-12-06 11:23:22.163871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.181 [2024-12-06 11:23:22.163879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.181 [2024-12-06 11:23:22.163883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.181 [2024-12-06 11:23:22.163887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.181 [2024-12-06 11:23:22.163898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:16.182 [2024-12-06 11:23:22.163903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:16.182 [2024-12-06 11:23:22.163907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e3550) 00:24:16.182 [2024-12-06 11:23:22.163914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.182 [2024-12-06 11:23:22.163926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a45580, cid 3, qid 0 00:24:16.182 [2024-12-06 11:23:22.164087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:16.182 [2024-12-06 11:23:22.164093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:16.182 [2024-12-06 11:23:22.164097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:16.182 [2024-12-06 11:23:22.164101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a45580) on tqpair=0x19e3550 00:24:16.182 [2024-12-06 11:23:22.164109] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:24:16.182 0% 00:24:16.182 Data Units Read: 0 00:24:16.182 Data Units Written: 0 00:24:16.182 Host Read Commands: 0 00:24:16.182 Host Write Commands: 0 00:24:16.182 Controller Busy Time: 0 minutes 00:24:16.182 Power Cycles: 0 00:24:16.182 Power On Hours: 0 hours 00:24:16.182 Unsafe Shutdowns: 0 00:24:16.182 Unrecoverable Media Errors: 0 00:24:16.182 Lifetime Error Log Entries: 0 00:24:16.182 Warning Temperature Time: 0 minutes 00:24:16.182 Critical Temperature Time: 0 minutes 00:24:16.182 00:24:16.182 Number of Queues 00:24:16.182 ================ 00:24:16.182 Number of I/O Submission Queues: 127 00:24:16.182 Number of I/O Completion Queues: 127 00:24:16.182 00:24:16.182 Active Namespaces 00:24:16.182 ================= 00:24:16.182 Namespace ID:1 00:24:16.182 Error Recovery Timeout: Unlimited 00:24:16.182 Command Set Identifier: NVM (00h) 00:24:16.182 Deallocate: Supported 00:24:16.182 Deallocated/Unwritten 
Error: Not Supported 00:24:16.182 Deallocated Read Value: Unknown 00:24:16.182 Deallocate in Write Zeroes: Not Supported 00:24:16.182 Deallocated Guard Field: 0xFFFF 00:24:16.182 Flush: Supported 00:24:16.182 Reservation: Supported 00:24:16.182 Namespace Sharing Capabilities: Multiple Controllers 00:24:16.182 Size (in LBAs): 131072 (0GiB) 00:24:16.182 Capacity (in LBAs): 131072 (0GiB) 00:24:16.182 Utilization (in LBAs): 131072 (0GiB) 00:24:16.182 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:16.182 EUI64: ABCDEF0123456789 00:24:16.182 UUID: 22eca44b-07b3-48ac-871a-56b6beae2a78 00:24:16.182 Thin Provisioning: Not Supported 00:24:16.182 Per-NS Atomic Units: Yes 00:24:16.182 Atomic Boundary Size (Normal): 0 00:24:16.182 Atomic Boundary Size (PFail): 0 00:24:16.182 Atomic Boundary Offset: 0 00:24:16.182 Maximum Single Source Range Length: 65535 00:24:16.182 Maximum Copy Length: 65535 00:24:16.182 Maximum Source Range Count: 1 00:24:16.182 NGUID/EUI64 Never Reused: No 00:24:16.182 Namespace Write Protected: No 00:24:16.182 Number of LBA Formats: 1 00:24:16.182 Current LBA Format: LBA Format #00 00:24:16.182 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:16.182 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.182 rmmod nvme_tcp 00:24:16.182 rmmod nvme_fabrics 00:24:16.182 rmmod nvme_keyring 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3526174 ']' 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3526174 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3526174 ']' 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3526174 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3526174 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3526174' 00:24:16.182 killing process with pid 3526174 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3526174 00:24:16.182 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3526174 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.443 11:23:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.991 00:24:18.991 real 0m12.470s 00:24:18.991 user 0m8.918s 00:24:18.991 sys 0m6.659s 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:18.991 
************************************ 00:24:18.991 END TEST nvmf_identify 00:24:18.991 ************************************ 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.991 ************************************ 00:24:18.991 START TEST nvmf_perf 00:24:18.991 ************************************ 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:18.991 * Looking for test storage... 00:24:18.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:18.991 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:18.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.992 --rc genhtml_branch_coverage=1 00:24:18.992 --rc genhtml_function_coverage=1 00:24:18.992 --rc genhtml_legend=1 00:24:18.992 --rc geninfo_all_blocks=1 00:24:18.992 --rc geninfo_unexecuted_blocks=1 00:24:18.992 00:24:18.992 ' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:18.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.992 --rc genhtml_branch_coverage=1 00:24:18.992 --rc genhtml_function_coverage=1 00:24:18.992 --rc genhtml_legend=1 00:24:18.992 --rc geninfo_all_blocks=1 00:24:18.992 --rc geninfo_unexecuted_blocks=1 00:24:18.992 00:24:18.992 ' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:18.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.992 --rc genhtml_branch_coverage=1 00:24:18.992 --rc genhtml_function_coverage=1 00:24:18.992 --rc genhtml_legend=1 00:24:18.992 --rc geninfo_all_blocks=1 00:24:18.992 --rc geninfo_unexecuted_blocks=1 00:24:18.992 00:24:18.992 ' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:18.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.992 --rc genhtml_branch_coverage=1 00:24:18.992 --rc genhtml_function_coverage=1 00:24:18.992 --rc genhtml_legend=1 00:24:18.992 --rc geninfo_all_blocks=1 00:24:18.992 --rc geninfo_unexecuted_blocks=1 00:24:18.992 00:24:18.992 ' 00:24:18.992 11:23:24 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.992 11:23:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:18.992 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:27.135 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.135 
11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:27.135 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:27.135 Found net devices under 0000:31:00.0: cvl_0_0 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:27.135 Found net devices under 0000:31:00.1: cvl_0_1 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.135 11:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.135 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.135 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.135 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:24:27.136 00:24:27.136 --- 10.0.0.2 ping statistics --- 00:24:27.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.136 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:24:27.136 00:24:27.136 --- 10.0.0.1 ping statistics --- 00:24:27.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.136 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3531210 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3531210 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.136 
11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3531210 ']' 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.136 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:27.136 [2024-12-06 11:23:33.196149] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:24:27.136 [2024-12-06 11:23:33.196198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.136 [2024-12-06 11:23:33.285823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.396 [2024-12-06 11:23:33.321873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.396 [2024-12-06 11:23:33.321906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.396 [2024-12-06 11:23:33.321928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.396 [2024-12-06 11:23:33.321935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.396 [2024-12-06 11:23:33.321940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:27.396 [2024-12-06 11:23:33.323373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.396 [2024-12-06 11:23:33.323481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.396 [2024-12-06 11:23:33.323634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.396 [2024-12-06 11:23:33.323634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:27.396 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:27.967 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:27.967 11:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:27.967 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:28.227 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:28.227 11:23:34 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:28.227 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:28.227 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:28.227 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:28.227 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:28.487 [2024-12-06 11:23:34.501087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.487 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.747 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:28.747 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.747 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:28.747 11:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:29.006 11:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.266 [2024-12-06 11:23:35.239810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.266 11:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:29.526 11:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:29.526 11:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:29.526 11:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:29.526 11:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:30.909 Initializing NVMe Controllers 00:24:30.909 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:30.909 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:30.909 Initialization complete. Launching workers. 00:24:30.909 ======================================================== 00:24:30.909 Latency(us) 00:24:30.909 Device Information : IOPS MiB/s Average min max 00:24:30.909 PCIE (0000:65:00.0) NSID 1 from core 0: 78965.16 308.46 404.55 13.33 7200.11 00:24:30.909 ======================================================== 00:24:30.909 Total : 78965.16 308.46 404.55 13.33 7200.11 00:24:30.909 00:24:30.909 11:23:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:31.855 Initializing NVMe Controllers 00:24:31.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:31.855 Initialization complete. Launching workers. 
00:24:31.855 ======================================================== 00:24:31.855 Latency(us) 00:24:31.855 Device Information : IOPS MiB/s Average min max 00:24:31.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.00 0.18 23034.36 248.07 45892.67 00:24:31.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 86.00 0.34 11681.33 7418.44 47892.84 00:24:31.855 ======================================================== 00:24:31.855 Total : 131.00 0.51 15581.23 248.07 47892.84 00:24:31.855 00:24:32.116 11:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:33.503 Initializing NVMe Controllers 00:24:33.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:33.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:33.503 Initialization complete. Launching workers. 
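(Editorial aside, not part of the log.) The "Total" average latency in the q=1 table above is not a simple mean of the two per-namespace averages; it is the IOPS-weighted mean. A minimal sketch checking that against the figures copied from the table:

```python
# Verify that the Total average latency reported by spdk_nvme_perf above is
# the IOPS-weighted mean of the per-namespace averages (figures copied from
# the q=1 table; this is a reading aid, not SPDK code).
rows = [
    (45.00, 23034.36),   # NSID 1: IOPS, average latency (us)
    (86.00, 11681.33),   # NSID 2: IOPS, average latency (us)
]
total_iops = sum(iops for iops, _ in rows)
weighted_avg = sum(iops * avg for iops, avg in rows) / total_iops

print(total_iops)              # 131.0, matching the Total row
print(round(weighted_avg, 2))  # 15581.23, matching the Total average
```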
00:24:33.503 ======================================================== 00:24:33.503 Latency(us) 00:24:33.503 Device Information : IOPS MiB/s Average min max 00:24:33.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10410.99 40.67 3073.96 500.61 6621.72 00:24:33.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3833.00 14.97 8394.30 6134.84 15878.20 00:24:33.503 ======================================================== 00:24:33.503 Total : 14243.99 55.64 4505.64 500.61 15878.20 00:24:33.503 00:24:33.503 11:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:33.503 11:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:33.503 11:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:36.052 Initializing NVMe Controllers 00:24:36.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.052 Controller IO queue size 128, less than required. 00:24:36.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.052 Controller IO queue size 128, less than required. 00:24:36.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:36.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:36.052 Initialization complete. Launching workers. 
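(Editorial aside, not part of the log.) The MiB/s column in these tables follows directly from the IOPS column and the `-o` block size: MiB/s = IOPS × io_size / 2^20. A quick check against the q=32, `-o 4096` table above:

```python
# Sanity-check the MiB/s column of the q=32 / -o 4096 table above:
# throughput in MiB/s should equal IOPS * io_size / 2**20.
io_size = 4096  # bytes, from the -o flag

def mib_per_s(iops: float) -> float:
    return iops * io_size / 2**20

print(round(mib_per_s(10410.99), 2))  # 40.67 (NSID 1)
print(round(mib_per_s(3833.00), 2))   # 14.97 (NSID 2)
print(round(mib_per_s(14243.99), 2))  # 55.64 (Total)
```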
00:24:36.052 ======================================================== 00:24:36.052 Latency(us) 00:24:36.052 Device Information : IOPS MiB/s Average min max 00:24:36.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1647.21 411.80 78453.48 50198.39 110422.31 00:24:36.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.40 148.60 226085.92 78211.17 322173.67 00:24:36.052 ======================================================== 00:24:36.052 Total : 2241.61 560.40 117600.44 50198.39 322173.67 00:24:36.052 00:24:36.052 11:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:36.052 No valid NVMe controllers or AIO or URING devices found 00:24:36.052 Initializing NVMe Controllers 00:24:36.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.052 Controller IO queue size 128, less than required. 00:24:36.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.052 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:36.052 Controller IO queue size 128, less than required. 00:24:36.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:36.052 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:36.052 WARNING: Some requested NVMe devices were skipped 00:24:36.052 11:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:38.601 Initializing NVMe Controllers 00:24:38.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.601 Controller IO queue size 128, less than required. 00:24:38.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.601 Controller IO queue size 128, less than required. 00:24:38.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:38.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:38.601 Initialization complete. Launching workers. 
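(Editorial aside, not part of the log.) The "Removing this ns from test" warnings above come from the `-o 36964` run: spdk_nvme_perf skips any namespace whose sector size does not evenly divide the requested I/O size. A minimal illustration of that divisibility check (my own sketch of the behaviour, not SPDK's actual source):

```python
# Illustrate why the -o 36964 run skipped both namespaces: 36964 is not a
# multiple of the 512-byte sector size, so each ns is removed from the test.
def ns_usable(io_size: int, sector_size: int) -> bool:
    return io_size % sector_size == 0

print(ns_usable(36964, 512))   # False -> both 512-byte namespaces skipped
print(36964 % 512)             # 100, the offending remainder
print(ns_usable(262144, 512))  # True -> the -o 262144 runs keep both
```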
00:24:38.601 00:24:38.601 ==================== 00:24:38.601 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:38.601 TCP transport: 00:24:38.601 polls: 18302 00:24:38.601 idle_polls: 9629 00:24:38.601 sock_completions: 8673 00:24:38.601 nvme_completions: 6463 00:24:38.601 submitted_requests: 9630 00:24:38.601 queued_requests: 1 00:24:38.601 00:24:38.601 ==================== 00:24:38.601 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:38.601 TCP transport: 00:24:38.601 polls: 22144 00:24:38.601 idle_polls: 13329 00:24:38.601 sock_completions: 8815 00:24:38.601 nvme_completions: 6741 00:24:38.601 submitted_requests: 10240 00:24:38.601 queued_requests: 1 00:24:38.601 ======================================================== 00:24:38.601 Latency(us) 00:24:38.601 Device Information : IOPS MiB/s Average min max 00:24:38.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1614.49 403.62 80334.69 38324.13 128185.24 00:24:38.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1683.94 420.99 76998.75 38049.48 127012.97 00:24:38.601 ======================================================== 00:24:38.601 Total : 3298.43 824.61 78631.60 38049.48 128185.24 00:24:38.601 00:24:38.601 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:38.601 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.863 11:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.863 rmmod nvme_tcp 00:24:38.863 rmmod nvme_fabrics 00:24:38.863 rmmod nvme_keyring 00:24:38.863 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.863 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:38.863 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:38.863 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3531210 ']' 00:24:38.864 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3531210 00:24:38.864 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3531210 ']' 00:24:38.864 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3531210 00:24:38.864 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:38.864 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.864 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3531210 00:24:39.123 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.123 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.123 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3531210' 00:24:39.123 killing process with pid 3531210 00:24:39.123 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 3531210 00:24:39.123 11:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3531210 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.034 11:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:43.654 00:24:43.654 real 0m24.509s 00:24:43.654 user 0m56.611s 00:24:43.654 sys 0m8.932s 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:43.654 ************************************ 00:24:43.654 END TEST nvmf_perf 00:24:43.654 ************************************ 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.654 ************************************ 00:24:43.654 START TEST nvmf_fio_host 00:24:43.654 ************************************ 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:43.654 * Looking for test storage... 00:24:43.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.654 11:23:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.654 11:23:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:43.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.654 --rc genhtml_branch_coverage=1 00:24:43.654 --rc genhtml_function_coverage=1 00:24:43.654 --rc genhtml_legend=1 00:24:43.654 --rc geninfo_all_blocks=1 00:24:43.654 --rc geninfo_unexecuted_blocks=1 00:24:43.654 00:24:43.654 ' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:43.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.654 --rc genhtml_branch_coverage=1 00:24:43.654 --rc genhtml_function_coverage=1 00:24:43.654 --rc genhtml_legend=1 00:24:43.654 --rc geninfo_all_blocks=1 00:24:43.654 --rc geninfo_unexecuted_blocks=1 00:24:43.654 00:24:43.654 ' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:43.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.654 --rc genhtml_branch_coverage=1 00:24:43.654 --rc genhtml_function_coverage=1 00:24:43.654 --rc genhtml_legend=1 00:24:43.654 --rc geninfo_all_blocks=1 00:24:43.654 --rc geninfo_unexecuted_blocks=1 00:24:43.654 00:24:43.654 ' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:43.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.654 --rc genhtml_branch_coverage=1 00:24:43.654 --rc genhtml_function_coverage=1 00:24:43.654 --rc genhtml_legend=1 00:24:43.654 --rc geninfo_all_blocks=1 00:24:43.654 --rc geninfo_unexecuted_blocks=1 00:24:43.654 00:24:43.654 ' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.654 11:23:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.654 11:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:24:51.813 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.813 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:51.814 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.814 11:23:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:51.814 Found net devices under 0000:31:00.0: cvl_0_0 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:51.814 Found net devices under 0000:31:00.1: cvl_0_1 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.814 11:23:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:24:51.814 00:24:51.814 --- 10.0.0.2 ping statistics --- 00:24:51.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.814 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:24:51.814 00:24:51.814 --- 10.0.0.1 ping statistics --- 00:24:51.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.814 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.814 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:52.074 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:52.074 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:52.074 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.074 11:23:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3538639 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3538639 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3538639 ']' 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.075 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.075 [2024-12-06 11:23:58.058748] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:24:52.075 [2024-12-06 11:23:58.058812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.075 [2024-12-06 11:23:58.150584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:52.075 [2024-12-06 11:23:58.191702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.075 [2024-12-06 11:23:58.191738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:52.075 [2024-12-06 11:23:58.191746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.075 [2024-12-06 11:23:58.191753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.075 [2024-12-06 11:23:58.191759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.075 [2024-12-06 11:23:58.193397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.075 [2024-12-06 11:23:58.193514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.075 [2024-12-06 11:23:58.193671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.075 [2024-12-06 11:23:58.193672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.011 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.012 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:53.012 11:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:53.012 [2024-12-06 11:23:59.009974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.012 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:53.012 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:53.012 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.012 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:53.271 Malloc1 00:24:53.271 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.531 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:53.531 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.791 [2024-12-06 11:23:59.800562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.791 11:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:54.051 11:24:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.051 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:54.052 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:54.052 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:54.052 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:54.052 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:54.052 11:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.312 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:54.312 fio-3.35 00:24:54.312 Starting 1 thread 00:24:56.856 00:24:56.856 test: (groupid=0, jobs=1): err= 0: pid=3539513: Fri Dec 6 11:24:02 2024 00:24:56.856 read: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(108MiB/2004msec) 00:24:56.856 slat (usec): min=2, max=287, avg= 2.17, stdev= 2.45 00:24:56.856 clat (usec): min=3417, max=8854, avg=5070.85, stdev=376.36 00:24:56.856 lat (usec): min=3420, max=8867, avg=5073.01, stdev=376.59 00:24:56.856 clat percentiles (usec): 00:24:56.856 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:24:56.856 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:24:56.856 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:24:56.856 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 8160], 99.95th=[ 8717], 00:24:56.857 | 99.99th=[ 8848] 00:24:56.857 bw ( KiB/s): min=53944, max=55880, per=99.98%, avg=55390.00, stdev=964.02, samples=4 00:24:56.857 iops : min=13486, max=13970, avg=13847.50, stdev=241.01, samples=4 00:24:56.857 write: IOPS=13.9k, BW=54.1MiB/s (56.8MB/s)(108MiB/2004msec); 0 zone resets 00:24:56.857 slat (usec): min=2, max=285, avg= 2.24, stdev= 1.87 00:24:56.857 clat (usec): min=2618, max=8171, avg=4116.23, stdev=331.66 00:24:56.857 lat (usec): min=2620, max=8173, avg=4118.47, stdev=331.94 00:24:56.857 clat percentiles (usec): 00:24:56.857 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:24:56.857 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:56.857 | 70.00th=[ 4228], 
80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:56.857 | 99.00th=[ 4817], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 7504], 00:24:56.857 | 99.99th=[ 8029] 00:24:56.857 bw ( KiB/s): min=54296, max=55936, per=99.96%, avg=55398.00, stdev=746.34, samples=4 00:24:56.857 iops : min=13574, max=13984, avg=13849.50, stdev=186.59, samples=4 00:24:56.857 lat (msec) : 4=17.17%, 10=82.83% 00:24:56.857 cpu : usr=72.44%, sys=26.16%, ctx=36, majf=0, minf=16 00:24:56.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:56.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:56.857 issued rwts: total=27755,27766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:56.857 00:24:56.857 Run status group 0 (all jobs): 00:24:56.857 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2004-2004msec 00:24:56.857 WRITE: bw=54.1MiB/s (56.8MB/s), 54.1MiB/s-54.1MiB/s (56.8MB/s-56.8MB/s), io=108MiB (114MB), run=2004-2004msec 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:56.857 11:24:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:56.857 11:24:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:57.117 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:57.117 fio-3.35 00:24:57.117 Starting 1 thread 00:24:59.660 00:24:59.660 test: (groupid=0, jobs=1): err= 0: pid=3540178: Fri Dec 6 11:24:05 2024 00:24:59.660 read: IOPS=9581, BW=150MiB/s (157MB/s)(301MiB/2008msec) 00:24:59.660 slat (usec): min=3, max=110, avg= 3.68, stdev= 1.64 00:24:59.660 clat (usec): min=1298, max=24659, avg=7950.84, stdev=2265.37 00:24:59.660 lat (usec): min=1302, max=24669, avg=7954.51, stdev=2265.65 00:24:59.660 clat percentiles (usec): 00:24:59.660 | 1.00th=[ 4080], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6063], 00:24:59.660 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 8225], 00:24:59.660 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[10814], 95.00th=[11338], 00:24:59.660 | 99.00th=[13435], 99.50th=[14615], 99.90th=[23987], 99.95th=[24249], 00:24:59.660 | 99.99th=[24511] 00:24:59.660 bw ( KiB/s): min=66720, max=88032, per=50.27%, avg=77072.00, stdev=10491.76, samples=4 00:24:59.660 iops : min= 4170, max= 5502, avg=4817.00, stdev=655.73, samples=4 00:24:59.660 write: IOPS=5657, BW=88.4MiB/s (92.7MB/s)(157MiB/1777msec); 0 zone resets 00:24:59.660 slat (usec): min=39, max=331, avg=41.45, stdev= 9.20 00:24:59.660 clat (usec): min=1818, max=25134, avg=9397.79, stdev=1884.82 00:24:59.660 lat (usec): min=1858, max=25258, avg=9439.25, stdev=1888.65 00:24:59.660 clat percentiles (usec): 00:24:59.660 | 1.00th=[ 5997], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8029], 00:24:59.660 | 
30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:24:59.660 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11600], 95.00th=[12518], 00:24:59.660 | 99.00th=[15139], 99.50th=[16188], 99.90th=[24249], 99.95th=[24773], 00:24:59.660 | 99.99th=[25035] 00:24:59.660 bw ( KiB/s): min=70464, max=91200, per=88.49%, avg=80104.00, stdev=9856.19, samples=4 00:24:59.660 iops : min= 4404, max= 5700, avg=5006.50, stdev=616.01, samples=4 00:24:59.660 lat (msec) : 2=0.06%, 4=0.65%, 10=75.56%, 20=23.40%, 50=0.33% 00:24:59.660 cpu : usr=88.39%, sys=10.71%, ctx=14, majf=0, minf=38 00:24:59.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:59.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:59.660 issued rwts: total=19240,10054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:59.660 00:24:59.660 Run status group 0 (all jobs): 00:24:59.660 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=301MiB (315MB), run=2008-2008msec 00:24:59.660 WRITE: bw=88.4MiB/s (92.7MB/s), 88.4MiB/s-88.4MiB/s (92.7MB/s-92.7MB/s), io=157MiB (165MB), run=1777-1777msec 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:59.660 11:24:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.660 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.660 rmmod nvme_tcp 00:24:59.920 rmmod nvme_fabrics 00:24:59.920 rmmod nvme_keyring 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3538639 ']' 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3538639 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3538639 ']' 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3538639 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3538639 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3538639' 00:24:59.920 killing 
process with pid 3538639 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3538639 00:24:59.920 11:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3538639 00:24:59.920 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.920 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.920 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.920 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:59.920 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:59.920 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.920 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.179 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.179 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.179 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.179 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.179 11:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.088 00:25:02.088 real 0m18.937s 00:25:02.088 user 1m7.139s 00:25:02.088 sys 0m8.414s 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.088 ************************************ 00:25:02.088 END TEST 
nvmf_fio_host 00:25:02.088 ************************************ 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.088 ************************************ 00:25:02.088 START TEST nvmf_failover 00:25:02.088 ************************************ 00:25:02.088 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:02.349 * Looking for test storage... 00:25:02.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:02.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.349 --rc genhtml_branch_coverage=1 00:25:02.349 --rc genhtml_function_coverage=1 00:25:02.349 --rc genhtml_legend=1 00:25:02.349 --rc geninfo_all_blocks=1 00:25:02.349 --rc geninfo_unexecuted_blocks=1 00:25:02.349 00:25:02.349 ' 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:02.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.349 --rc genhtml_branch_coverage=1 00:25:02.349 --rc genhtml_function_coverage=1 00:25:02.349 --rc genhtml_legend=1 00:25:02.349 --rc geninfo_all_blocks=1 00:25:02.349 --rc geninfo_unexecuted_blocks=1 00:25:02.349 00:25:02.349 ' 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:02.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.349 --rc genhtml_branch_coverage=1 00:25:02.349 --rc genhtml_function_coverage=1 00:25:02.349 --rc genhtml_legend=1 00:25:02.349 --rc geninfo_all_blocks=1 00:25:02.349 --rc geninfo_unexecuted_blocks=1 00:25:02.349 00:25:02.349 ' 00:25:02.349 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:02.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.349 --rc genhtml_branch_coverage=1 00:25:02.349 --rc genhtml_function_coverage=1 00:25:02.350 --rc genhtml_legend=1 00:25:02.350 --rc geninfo_all_blocks=1 
00:25:02.350 --rc geninfo_unexecuted_blocks=1 00:25:02.350 00:25:02.350 ' 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.350 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.497 11:24:16 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:10.497 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:10.497 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.497 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:10.498 Found net devices under 0000:31:00.0: cvl_0_0 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:10.498 Found net devices under 0000:31:00.1: cvl_0_1 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.498 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:25:10.759 00:25:10.759 --- 10.0.0.2 ping statistics --- 00:25:10.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.759 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:25:10.759 00:25:10.759 --- 10.0.0.1 ping statistics --- 00:25:10.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.759 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3545879 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3545879 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3545879 ']' 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.759 11:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.021 [2024-12-06 11:24:16.926347] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:25:11.021 [2024-12-06 11:24:16.926416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.021 [2024-12-06 11:24:17.034248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:11.021 [2024-12-06 11:24:17.084717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.021 [2024-12-06 11:24:17.084773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.021 [2024-12-06 11:24:17.084782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.021 [2024-12-06 11:24:17.084789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:11.021 [2024-12-06 11:24:17.084795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.021 [2024-12-06 11:24:17.086667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.021 [2024-12-06 11:24:17.086833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.021 [2024-12-06 11:24:17.086834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.594 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.594 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:11.594 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.594 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.594 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.855 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.855 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:11.855 [2024-12-06 11:24:17.936295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.855 11:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:12.117 Malloc0 00:25:12.117 11:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.378 11:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.378 11:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.640 [2024-12-06 11:24:18.698033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.640 11:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:12.901 [2024-12-06 11:24:18.874526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.902 11:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:13.162 [2024-12-06 11:24:19.091213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3546248 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3546248 /var/tmp/bdevperf.sock 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3546248 ']' 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.162 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.102 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.102 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:14.102 11:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:14.102 NVMe0n1 00:25:14.102 11:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:14.362 00:25:14.622 11:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.622 11:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3546578 00:25:14.622 11:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:15.561 11:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:15.561 [2024-12-06 11:24:21.707721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfdf80 is same with the state(6) to be set
[... 6 further identical *ERROR* lines for tqpair=0xcfdf80, through 2024-12-06 11:24:21.707794, elided ...]
00:25:15.821 11:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:19.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:19.125
00:25:19.125 11:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:19.125 [2024-12-06 11:24:25.185452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfec80 is same with the state(6) to be set
[... identical *ERROR* lines for tqpair=0xcfec80, through 2024-12-06 11:24:25.185855, elided ...]
00:25:19.125 11:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:22.423 11:24:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:22.423 [2024-12-06 11:24:28.379740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:22.423 11:24:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:23.366 11:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:23.628 [2024-12-06 11:24:29.567709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4a370 is same with the state(6) to be set
[... identical *ERROR* lines for tqpair=0xe4a370, through 2024-12-06 11:24:29.568039, elided ...]
00:25:23.628 11:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3546578
00:25:30.222 {
00:25:30.222 "results": [
00:25:30.222
"job": "NVMe0n1", 00:25:30.222 "core_mask": "0x1", 00:25:30.222 "workload": "verify", 00:25:30.222 "status": "finished", 00:25:30.222 "verify_range": { 00:25:30.222 "start": 0, 00:25:30.222 "length": 16384 00:25:30.222 }, 00:25:30.222 "queue_depth": 128, 00:25:30.222 "io_size": 4096, 00:25:30.222 "runtime": 15.002489, 00:25:30.222 "iops": 11238.20187436898, 00:25:30.222 "mibps": 43.89922607175383, 00:25:30.222 "io_failed": 5661, 00:25:30.222 "io_timeout": 0, 00:25:30.222 "avg_latency_us": 10991.923562757991, 00:25:30.222 "min_latency_us": 774.8266666666667, 00:25:30.222 "max_latency_us": 15728.64 00:25:30.222 } 00:25:30.222 ], 00:25:30.222 "core_count": 1 00:25:30.222 } 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3546248 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3546248 ']' 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3546248 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546248 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546248' 00:25:30.222 killing process with pid 3546248 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3546248 00:25:30.222 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3546248 00:25:30.222 11:24:35 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.222 [2024-12-06 11:24:19.170401] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:25:30.222 [2024-12-06 11:24:19.170461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546248 ] 00:25:30.222 [2024-12-06 11:24:19.248707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.222 [2024-12-06 11:24:19.285338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.222 Running I/O for 15 seconds... 00:25:30.222 11286.00 IOPS, 44.09 MiB/s [2024-12-06T10:24:36.389Z] [2024-12-06 11:24:21.707953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.707986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.222 [2024-12-06 11:24:21.708130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:30.222 [2024-12-06 11:24:21.708139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.222 [2024-12-06 11:24:21.708146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided (timestamps 11:24:21.708156-11:24:21.710059): all remaining outstanding I/O on sqid:1 — WRITE commands (lba 97416-97552, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 96536-97280, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) — each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:30.225 [2024-12-06 11:24:21.710068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-12-06 11:24:21.710075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.225 [2024-12-06 11:24:21.710085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-12-06 11:24:21.710092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.225 [2024-12-06 11:24:21.710102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-12-06 11:24:21.710110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.225 [2024-12-06 11:24:21.710120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-12-06 11:24:21.710128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.225 [2024-12-06 11:24:21.710137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.225 [2024-12-06 11:24:21.710144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.225 [2024-12-06 11:24:21.710153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe560 is same with the state(6) to be set 00:25:30.225 [2024-12-06 11:24:21.710166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.225 [2024-12-06 11:24:21.710172] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.225 [2024-12-06 11:24:21.710180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:25:30.225 [2024-12-06 11:24:21.710187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.225 [2024-12-06 11:24:21.710228] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:30.225 [2024-12-06 11:24:21.710250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.225 [2024-12-06 11:24:21.710258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.225 [2024-12-06 11:24:21.710267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.226 [2024-12-06 11:24:21.710274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.226 [2024-12-06 11:24:21.710282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.226 [2024-12-06 11:24:21.710290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.226 [2024-12-06 11:24:21.710298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.226 [2024-12-06 11:24:21.710305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.226 [2024-12-06 
11:24:21.710313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:30.226 [2024-12-06 11:24:21.713946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:30.226 [2024-12-06 11:24:21.713970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9d930 (9): Bad file descriptor 00:25:30.226 [2024-12-06 11:24:21.748881] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:25:30.226 10984.00 IOPS, 42.91 MiB/s [2024-12-06T10:24:36.393Z] 11039.33 IOPS, 43.12 MiB/s [2024-12-06T10:24:36.393Z] 11108.75 IOPS, 43.39 MiB/s [2024-12-06T10:24:36.393Z] [2024-12-06 11:24:25.185938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.226 [2024-12-06 11:24:25.185977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.226 [... identical READ / ABORTED - SQ DELETION notice pairs repeated for lba:21392 through lba:22016 ...] 00:25:30.228 [2024-12-06 11:24:25.187332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 
[2024-12-06 11:24:25.187382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 
11:24:25.187665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.228 [2024-12-06 11:24:25.187727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.228 [2024-12-06 11:24:25.187761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.228 [2024-12-06 11:24:25.187770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.187987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.187994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.188011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.188027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.188044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.188060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.188076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.188093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.229 [2024-12-06 11:24:25.188109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.229 [2024-12-06 11:24:25.188141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.229 [2024-12-06 11:24:25.188148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:8 PRP1 0x0 PRP2 0x0 00:25:30.229 [2024-12-06 11:24:25.188156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188197] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:30.229 [2024-12-06 11:24:25.188218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:25.188226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:25.188242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:25.188257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:25.188273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:25.188280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:25:30.229 [2024-12-06 11:24:25.188316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9d930 (9): Bad file descriptor 00:25:30.229 [2024-12-06 11:24:25.191905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:30.229 [2024-12-06 11:24:25.226614] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:30.229 11139.60 IOPS, 43.51 MiB/s [2024-12-06T10:24:36.396Z] 11168.83 IOPS, 43.63 MiB/s [2024-12-06T10:24:36.396Z] 11263.86 IOPS, 44.00 MiB/s [2024-12-06T10:24:36.396Z] 11283.88 IOPS, 44.08 MiB/s [2024-12-06T10:24:36.396Z] [2024-12-06 11:24:29.569097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:29.569133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:29.569152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:29.569168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.229 [2024-12-06 11:24:29.569184] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d930 is same with the state(6) to be set 00:25:30.229 [2024-12-06 11:24:29.569256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.229 [2024-12-06 11:24:29.569272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.229 [2024-12-06 11:24:29.569294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.229 [2024-12-06 11:24:29.569310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.229 [2024-12-06 11:24:29.569327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.229 [2024-12-06 11:24:29.569344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:30.229 [2024-12-06 11:24:29.569353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.229 [2024-12-06 11:24:29.569361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569445] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.230 [2024-12-06 11:24:29.569539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:70 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.230 [2024-12-06 11:24:29.569546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 41 similar NOTICE pairs omitted: queued READ commands (sqid:1, lba:40776-41096, len:8) each printed and completed as ABORTED - SQ DELETION (00/08) ...]
[... 69 similar NOTICE pairs omitted: queued WRITE commands (sqid:1, lba:41104-41648, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:25:30.233 [2024-12-06 11:24:29.571404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:30.233 [2024-12-06 11:24:29.571410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:30.233 [2024-12-06 11:24:29.571417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41656 len:8 PRP1 0x0 PRP2 0x0
00:25:30.233 [2024-12-06 11:24:29.571425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.233 [2024-12-06 11:24:29.571466] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:30.233 [2024-12-06 11:24:29.571477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:25:30.233 [2024-12-06 11:24:29.575060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:30.233 [2024-12-06 11:24:29.575088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9d930 (9): Bad file descriptor 00:25:30.233 [2024-12-06 11:24:29.641060] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:30.233 11180.44 IOPS, 43.67 MiB/s [2024-12-06T10:24:36.400Z] 11207.80 IOPS, 43.78 MiB/s [2024-12-06T10:24:36.400Z] 11221.82 IOPS, 43.84 MiB/s [2024-12-06T10:24:36.400Z] 11252.08 IOPS, 43.95 MiB/s [2024-12-06T10:24:36.400Z] 11250.08 IOPS, 43.95 MiB/s [2024-12-06T10:24:36.400Z] 11252.07 IOPS, 43.95 MiB/s 00:25:30.233 Latency(us) 00:25:30.233 [2024-12-06T10:24:36.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.233 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:30.233 Verification LBA range: start 0x0 length 0x4000 00:25:30.233 NVMe0n1 : 15.00 11238.20 43.90 377.34 0.00 10991.92 774.83 15728.64 00:25:30.233 [2024-12-06T10:24:36.400Z] =================================================================================================================== 00:25:30.233 [2024-12-06T10:24:36.400Z] Total : 11238.20 43.90 377.34 0.00 10991.92 774.83 15728.64 00:25:30.233 Received shutdown signal, test time was about 15.000000 seconds 00:25:30.233 00:25:30.233 Latency(us) 00:25:30.233 [2024-12-06T10:24:36.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.233 [2024-12-06T10:24:36.400Z] =================================================================================================================== 00:25:30.233 [2024-12-06T10:24:36.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3549591 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3549591 /var/tmp/bdevperf.sock 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3549591 ']' 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.233 11:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:30.804 11:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.804 11:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:30.804 11:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:30.804 [2024-12-06 11:24:36.896987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:30.804 11:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:31.064 [2024-12-06 11:24:37.077387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:31.064 11:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:31.325 NVMe0n1 00:25:31.325 11:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:31.585 00:25:31.846 11:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.107 00:25:32.107 11:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:32.107 11:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:32.107 11:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:32.367 11:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:35.670 11:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:35.670 11:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:35.670 11:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:35.670 11:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3550611 00:25:35.670 11:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3550611 00:25:36.611 { 00:25:36.611 "results": [ 00:25:36.611 { 00:25:36.611 "job": "NVMe0n1", 00:25:36.611 "core_mask": "0x1", 00:25:36.611 "workload": "verify", 00:25:36.611 "status": "finished", 00:25:36.611 "verify_range": { 00:25:36.611 "start": 0, 00:25:36.611 "length": 16384 00:25:36.611 }, 00:25:36.611 "queue_depth": 128, 00:25:36.611 "io_size": 4096, 00:25:36.611 "runtime": 1.005622, 00:25:36.611 "iops": 11297.485536314838, 00:25:36.611 "mibps": 44.130802876229836, 00:25:36.611 "io_failed": 0, 00:25:36.611 "io_timeout": 0, 00:25:36.611 "avg_latency_us": 
11271.0066320453, 00:25:36.611 "min_latency_us": 1112.7466666666667, 00:25:36.611 "max_latency_us": 11304.96 00:25:36.611 } 00:25:36.611 ], 00:25:36.611 "core_count": 1 00:25:36.611 } 00:25:36.611 11:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:36.611 [2024-12-06 11:24:35.947359] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:25:36.611 [2024-12-06 11:24:35.947431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549591 ] 00:25:36.611 [2024-12-06 11:24:36.034606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.611 [2024-12-06 11:24:36.070093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.611 [2024-12-06 11:24:38.392530] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:36.611 [2024-12-06 11:24:38.392578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.611 [2024-12-06 11:24:38.392590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.611 [2024-12-06 11:24:38.392600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.611 [2024-12-06 11:24:38.392608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.611 [2024-12-06 11:24:38.392616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:36.611 [2024-12-06 11:24:38.392623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.611 [2024-12-06 11:24:38.392631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.611 [2024-12-06 11:24:38.392638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.611 [2024-12-06 11:24:38.392646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:36.612 [2024-12-06 11:24:38.392674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:36.612 [2024-12-06 11:24:38.392690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115d930 (9): Bad file descriptor 00:25:36.612 [2024-12-06 11:24:38.493978] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:36.612 Running I/O for 1 seconds... 
00:25:36.612 11224.00 IOPS, 43.84 MiB/s 00:25:36.612 Latency(us) 00:25:36.612 [2024-12-06T10:24:42.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.612 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:36.612 Verification LBA range: start 0x0 length 0x4000 00:25:36.612 NVMe0n1 : 1.01 11297.49 44.13 0.00 0.00 11271.01 1112.75 11304.96 00:25:36.612 [2024-12-06T10:24:42.779Z] =================================================================================================================== 00:25:36.612 [2024-12-06T10:24:42.779Z] Total : 11297.49 44.13 0.00 0.00 11271.01 1112.75 11304.96 00:25:36.612 11:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.612 11:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:36.872 11:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.133 11:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.133 11:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:37.133 11:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.393 11:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3549591 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3549591 ']' 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3549591 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3549591 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3549591' 00:25:40.690 killing process with pid 3549591 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3549591 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3549591 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:40.690 11:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.950 rmmod nvme_tcp 00:25:40.950 rmmod nvme_fabrics 00:25:40.950 rmmod nvme_keyring 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3545879 ']' 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3545879 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3545879 ']' 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3545879 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.950 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3545879 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3545879' 00:25:41.208 killing process with pid 3545879 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3545879 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3545879 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.208 11:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.750 11:24:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:43.750 00:25:43.750 real 0m41.136s 00:25:43.750 user 2m4.132s 00:25:43.750 sys 
0m9.130s 00:25:43.750 11:24:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.750 11:24:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:43.750 ************************************ 00:25:43.750 END TEST nvmf_failover 00:25:43.750 ************************************ 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.751 ************************************ 00:25:43.751 START TEST nvmf_host_discovery 00:25:43.751 ************************************ 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:43.751 * Looking for test storage... 
00:25:43.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:43.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.751 --rc genhtml_branch_coverage=1 00:25:43.751 --rc genhtml_function_coverage=1 00:25:43.751 --rc 
genhtml_legend=1 00:25:43.751 --rc geninfo_all_blocks=1 00:25:43.751 --rc geninfo_unexecuted_blocks=1 00:25:43.751 00:25:43.751 ' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:43.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.751 --rc genhtml_branch_coverage=1 00:25:43.751 --rc genhtml_function_coverage=1 00:25:43.751 --rc genhtml_legend=1 00:25:43.751 --rc geninfo_all_blocks=1 00:25:43.751 --rc geninfo_unexecuted_blocks=1 00:25:43.751 00:25:43.751 ' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:43.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.751 --rc genhtml_branch_coverage=1 00:25:43.751 --rc genhtml_function_coverage=1 00:25:43.751 --rc genhtml_legend=1 00:25:43.751 --rc geninfo_all_blocks=1 00:25:43.751 --rc geninfo_unexecuted_blocks=1 00:25:43.751 00:25:43.751 ' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:43.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.751 --rc genhtml_branch_coverage=1 00:25:43.751 --rc genhtml_function_coverage=1 00:25:43.751 --rc genhtml_legend=1 00:25:43.751 --rc geninfo_all_blocks=1 00:25:43.751 --rc geninfo_unexecuted_blocks=1 00:25:43.751 00:25:43.751 ' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.751 11:24:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.751 11:24:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.751 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.752 11:24:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:43.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:43.752 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.015 
11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.015 11:24:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:52.015 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:52.015 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:52.015 Found net devices under 0000:31:00.0: cvl_0_0 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:52.015 Found net devices under 0000:31:00.1: cvl_0_1 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.015 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.016 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.016 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.016 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.016 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.016 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:25:52.016 00:25:52.016 --- 10.0.0.2 ping statistics --- 00:25:52.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.016 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:52.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:25:52.016 00:25:52.016 --- 10.0.0.1 ping statistics --- 00:25:52.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.016 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.016 
11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3556401 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3556401 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3556401 ']' 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.016 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.016 [2024-12-06 11:24:58.154937] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:25:52.016 [2024-12-06 11:24:58.154986] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.277 [2024-12-06 11:24:58.259511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.277 [2024-12-06 11:24:58.293907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.277 [2024-12-06 11:24:58.293943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.277 [2024-12-06 11:24:58.293950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.277 [2024-12-06 11:24:58.293957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.277 [2024-12-06 11:24:58.293963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:52.277 [2024-12-06 11:24:58.294534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.849 [2024-12-06 11:24:58.983772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.849 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.849 [2024-12-06 11:24:58.995992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:52.849 11:24:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.849 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:52.849 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.849 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.849 null0 00:25:52.849 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.849 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:52.849 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.849 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.111 null1 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3556670 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3556670 /tmp/host.sock 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3556670 ']' 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:53.111 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.111 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.111 [2024-12-06 11:24:59.099340] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:25:53.111 [2024-12-06 11:24:59.099401] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3556670 ] 00:25:53.111 [2024-12-06 11:24:59.182440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.111 [2024-12-06 11:24:59.224328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:54.051 
11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:54.051 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:54.052 11:24:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.052 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:54.052 
11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.052 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.313 [2024-12-06 11:25:00.243397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:54.313 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:54.884 [2024-12-06 11:25:00.962264] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.884 [2024-12-06 11:25:00.962291] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.884 [2024-12-06 11:25:00.962305] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.144 [2024-12-06 11:25:01.089673] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:55.144 [2024-12-06 11:25:01.150722] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:55.144 [2024-12-06 11:25:01.151645] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xac02d0:1 started. 00:25:55.144 [2024-12-06 11:25:01.153276] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.144 [2024-12-06 11:25:01.153293] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:55.144 [2024-12-06 11:25:01.161260] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xac02d0 was disconnected and freed. delete nvme_qpair. 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.405 11:25:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:55.405 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.406 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.406 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.406 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.406 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.668 
11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.668 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.930 [2024-12-06 11:25:01.912898] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xac0710:1 started. 00:25:55.930 [2024-12-06 11:25:01.923124] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xac0710 was disconnected and freed. delete nvme_qpair. 
00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.930 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.930 [2024-12-06 11:25:02.003869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:55.930 [2024-12-06 11:25:02.004501] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:55.930 [2024-12-06 11:25:02.004521] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.930 11:25:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.930 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.191 11:25:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.191 [2024-12-06 11:25:02.132347] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:56.191 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:56.191 [2024-12-06 11:25:02.237278] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:56.191 [2024-12-06 11:25:02.237317] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.191 [2024-12-06 11:25:02.237326] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:25:56.191 [2024-12-06 11:25:02.237332] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.134 [2024-12-06 11:25:03.280080] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:57.134 [2024-12-06 11:25:03.280106] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:57.134 [2024-12-06 11:25:03.287493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.134 [2024-12-06 11:25:03.287515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.134 [2024-12-06 11:25:03.287526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.134 [2024-12-06 11:25:03.287535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.134 [2024-12-06 11:25:03.287543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.134 [2024-12-06 11:25:03.287551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.134 [2024-12-06 11:25:03.287559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.134 [2024-12-06 11:25:03.287571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.134 [2024-12-06 11:25:03.287579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90910 is same with the state(6) to be set 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.134 11:25:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.134 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.134 [2024-12-06 11:25:03.297505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90910 (9): Bad file descriptor 00:25:57.396 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.396 [2024-12-06 11:25:03.307540] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.396 [2024-12-06 11:25:03.307553] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.396 [2024-12-06 11:25:03.307560] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.396 [2024-12-06 11:25:03.307566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.396 [2024-12-06 11:25:03.307585] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:57.396 [2024-12-06 11:25:03.308106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.396 [2024-12-06 11:25:03.308146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90910 with addr=10.0.0.2, port=4420 00:25:57.396 [2024-12-06 11:25:03.308157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90910 is same with the state(6) to be set 00:25:57.396 [2024-12-06 11:25:03.308177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90910 (9): Bad file descriptor 00:25:57.396 [2024-12-06 11:25:03.308204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.396 [2024-12-06 11:25:03.308212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.396 [2024-12-06 11:25:03.308221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.396 [2024-12-06 11:25:03.308229] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.396 [2024-12-06 11:25:03.308235] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.396 [2024-12-06 11:25:03.308240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:57.396 [2024-12-06 11:25:03.317618] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.396 [2024-12-06 11:25:03.317632] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:57.396 [2024-12-06 11:25:03.317637] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.396 [2024-12-06 11:25:03.317641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.396 [2024-12-06 11:25:03.317658] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:57.396 [2024-12-06 11:25:03.318100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.396 [2024-12-06 11:25:03.318138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90910 with addr=10.0.0.2, port=4420 00:25:57.396 [2024-12-06 11:25:03.318149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90910 is same with the state(6) to be set 00:25:57.396 [2024-12-06 11:25:03.318168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90910 (9): Bad file descriptor 00:25:57.396 [2024-12-06 11:25:03.318180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.396 [2024-12-06 11:25:03.318187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.396 [2024-12-06 11:25:03.318195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.396 [2024-12-06 11:25:03.318203] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.396 [2024-12-06 11:25:03.318208] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.396 [2024-12-06 11:25:03.318213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:57.397 [2024-12-06 11:25:03.327690] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.397 [2024-12-06 11:25:03.327707] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.397 [2024-12-06 11:25:03.327712] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.397 [2024-12-06 11:25:03.327717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.397 [2024-12-06 11:25:03.327734] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:57.397 [2024-12-06 11:25:03.327945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.397 [2024-12-06 11:25:03.327960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90910 with addr=10.0.0.2, port=4420 00:25:57.397 [2024-12-06 11:25:03.327968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90910 is same with the state(6) to be set 00:25:57.397 [2024-12-06 11:25:03.327979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90910 (9): Bad file descriptor 00:25:57.397 [2024-12-06 11:25:03.327990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.397 [2024-12-06 11:25:03.327997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.397 [2024-12-06 11:25:03.328005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.397 [2024-12-06 11:25:03.328011] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:57.397 [2024-12-06 11:25:03.328016] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.397 [2024-12-06 11:25:03.328020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.397 [2024-12-06 11:25:03.337765] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.397 [2024-12-06 11:25:03.337778] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.397 [2024-12-06 11:25:03.337792] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.397 [2024-12-06 11:25:03.337797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.397 [2024-12-06 11:25:03.337812] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.397 [2024-12-06 11:25:03.338048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.397 [2024-12-06 11:25:03.338063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90910 with addr=10.0.0.2, port=4420 00:25:57.397 [2024-12-06 11:25:03.338071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90910 is same with the state(6) to be set 00:25:57.397 [2024-12-06 11:25:03.338082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90910 (9): Bad file descriptor 00:25:57.397 [2024-12-06 11:25:03.338092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.397 [2024-12-06 11:25:03.338099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.397 [2024-12-06 11:25:03.338109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.397 [2024-12-06 11:25:03.338115] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.397 [2024-12-06 11:25:03.338120] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.397 [2024-12-06 11:25:03.338124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.397 [2024-12-06 11:25:03.347844] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.397 [2024-12-06 11:25:03.347858] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:57.397 [2024-12-06 11:25:03.347867] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.397 [2024-12-06 11:25:03.347872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.397 [2024-12-06 11:25:03.347887] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:57.397 [2024-12-06 11:25:03.348220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.397 [2024-12-06 11:25:03.348233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90910 with addr=10.0.0.2, port=4420 00:25:57.397 [2024-12-06 11:25:03.348240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90910 is same with the state(6) to be set 00:25:57.397 [2024-12-06 11:25:03.348256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90910 (9): Bad file descriptor 00:25:57.397 [2024-12-06 11:25:03.348272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.397 [2024-12-06 11:25:03.348279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.397 [2024-12-06 11:25:03.348287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.397 [2024-12-06 11:25:03.348293] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.397 [2024-12-06 11:25:03.348297] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.397 [2024-12-06 11:25:03.348302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:57.397 [2024-12-06 11:25:03.357917] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:57.397 [2024-12-06 11:25:03.357930] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:57.397 [2024-12-06 11:25:03.357934] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:57.397 [2024-12-06 11:25:03.357939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:57.397 [2024-12-06 11:25:03.357953] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:57.397 [2024-12-06 11:25:03.358164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.397 [2024-12-06 11:25:03.358175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90910 with addr=10.0.0.2, port=4420 00:25:57.397 [2024-12-06 11:25:03.358183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90910 is same with the state(6) to be set 00:25:57.397 [2024-12-06 11:25:03.358194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90910 (9): Bad file descriptor 00:25:57.397 [2024-12-06 11:25:03.358204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:57.397 [2024-12-06 11:25:03.358211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:57.397 [2024-12-06 11:25:03.358218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:57.397 [2024-12-06 11:25:03.358225] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:57.397 [2024-12-06 11:25:03.358229] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:57.397 [2024-12-06 11:25:03.358234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:57.397 [2024-12-06 11:25:03.367344] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:57.397 [2024-12-06 11:25:03.367365] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.397 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:57.398 11:25:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:57.398 11:25:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.398 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.658 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.659 
11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:57.659 11:25:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.659 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.600 [2024-12-06 11:25:04.723062] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:58.600 [2024-12-06 11:25:04.723079] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:58.600 [2024-12-06 11:25:04.723091] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.861 [2024-12-06 11:25:04.811357] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:58.861 [2024-12-06 11:25:04.915197] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:58.861 [2024-12-06 11:25:04.916003] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xa91cd0:1 started. 00:25:58.861 [2024-12-06 11:25:04.917877] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.861 [2024-12-06 11:25:04.917904] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.861 [2024-12-06 11:25:04.920723] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xa91cd0 was disconnected and freed. delete nvme_qpair. 
00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.861 request: 00:25:58.861 { 00:25:58.861 "name": "nvme", 00:25:58.861 "trtype": "tcp", 00:25:58.861 "traddr": "10.0.0.2", 00:25:58.861 "adrfam": "ipv4", 00:25:58.861 "trsvcid": "8009", 00:25:58.861 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:58.861 "wait_for_attach": true, 00:25:58.861 "method": "bdev_nvme_start_discovery", 00:25:58.861 "req_id": 1 00:25:58.861 } 00:25:58.861 Got JSON-RPC error response 00:25:58.861 response: 00:25:58.861 { 00:25:58.861 "code": -17, 00:25:58.861 "message": "File exists" 00:25:58.861 } 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@655 -- # es=1 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:25:58.861 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.861 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.122 request: 00:25:59.122 { 00:25:59.122 "name": "nvme_second", 
00:25:59.122 "trtype": "tcp", 00:25:59.122 "traddr": "10.0.0.2", 00:25:59.122 "adrfam": "ipv4", 00:25:59.122 "trsvcid": "8009", 00:25:59.122 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:59.122 "wait_for_attach": true, 00:25:59.122 "method": "bdev_nvme_start_discovery", 00:25:59.122 "req_id": 1 00:25:59.122 } 00:25:59.122 Got JSON-RPC error response 00:25:59.122 response: 00:25:59.122 { 00:25:59.122 "code": -17, 00:25:59.122 "message": "File exists" 00:25:59.122 } 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
[[ nvme == \n\v\m\e ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 
00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.122 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.062 [2024-12-06 11:25:06.166145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.062 [2024-12-06 11:25:06.166173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac1b30 with addr=10.0.0.2, port=8010 00:26:00.063 [2024-12-06 11:25:06.166190] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:00.063 [2024-12-06 11:25:06.166198] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:00.063 [2024-12-06 11:25:06.166205] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:01.003 [2024-12-06 11:25:07.168466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.003 [2024-12-06 11:25:07.168489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabfc60 with addr=10.0.0.2, port=8010 00:26:01.003 [2024-12-06 11:25:07.168499] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:01.003 [2024-12-06 11:25:07.168506] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:01.003 [2024-12-06 11:25:07.168512] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:02.384 [2024-12-06 11:25:08.170454] bdev_nvme.c:7554:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:02.384 request: 00:26:02.384 { 00:26:02.384 "name": "nvme_second", 00:26:02.384 "trtype": "tcp", 00:26:02.384 "traddr": "10.0.0.2", 00:26:02.384 "adrfam": "ipv4", 00:26:02.384 "trsvcid": "8010", 00:26:02.384 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:02.384 "wait_for_attach": false, 00:26:02.384 "attach_timeout_ms": 3000, 00:26:02.384 "method": "bdev_nvme_start_discovery", 00:26:02.384 "req_id": 1 00:26:02.384 } 00:26:02.384 Got JSON-RPC error response 00:26:02.384 response: 00:26:02.384 { 00:26:02.384 "code": -110, 00:26:02.384 "message": "Connection timed out" 00:26:02.384 } 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 
00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3556670 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.384 rmmod nvme_tcp 00:26:02.384 rmmod nvme_fabrics 00:26:02.384 rmmod nvme_keyring 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3556401 ']' 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3556401 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3556401 ']' 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # 
kill -0 3556401 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3556401 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3556401' 00:26:02.384 killing process with pid 3556401 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3556401 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3556401 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:02.384 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.385 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.385 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:02.385 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.385 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:26:02.385 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.385 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.385 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.924 00:26:04.924 real 0m21.106s 00:26:04.924 user 0m23.537s 00:26:04.924 sys 0m7.900s 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.924 ************************************ 00:26:04.924 END TEST nvmf_host_discovery 00:26:04.924 ************************************ 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.924 ************************************ 00:26:04.924 START TEST nvmf_host_multipath_status 00:26:04.924 ************************************ 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:04.924 * Looking for test storage... 
00:26:04.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:04.924 11:25:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.924 11:25:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.924 --rc genhtml_branch_coverage=1 00:26:04.924 --rc genhtml_function_coverage=1 00:26:04.924 --rc genhtml_legend=1 00:26:04.924 --rc geninfo_all_blocks=1 00:26:04.924 --rc geninfo_unexecuted_blocks=1 00:26:04.924 00:26:04.924 ' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.924 --rc genhtml_branch_coverage=1 00:26:04.924 --rc genhtml_function_coverage=1 00:26:04.924 --rc genhtml_legend=1 00:26:04.924 --rc geninfo_all_blocks=1 00:26:04.924 --rc geninfo_unexecuted_blocks=1 00:26:04.924 00:26:04.924 ' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.924 --rc genhtml_branch_coverage=1 00:26:04.924 --rc genhtml_function_coverage=1 00:26:04.924 --rc genhtml_legend=1 00:26:04.924 --rc geninfo_all_blocks=1 00:26:04.924 --rc geninfo_unexecuted_blocks=1 00:26:04.924 00:26:04.924 ' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:04.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.924 --rc genhtml_branch_coverage=1 00:26:04.924 --rc genhtml_function_coverage=1 00:26:04.924 --rc genhtml_legend=1 00:26:04.924 --rc geninfo_all_blocks=1 00:26:04.924 --rc geninfo_unexecuted_blocks=1 00:26:04.924 00:26:04.924 ' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:04.924 
11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.924 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.925 11:25:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.925 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
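The array declarations above (`local -ga e810`, `local -ga x722`, `local -ga mlx`, `local -A pci_drivers`) set up per-NIC-family device lists that the trace then fills from a `pci_bus_cache`-style associative map keyed by "vendor:device". A condensed sketch of that bookkeeping idiom, with the PCI addresses invented for illustration:

```shell
#!/usr/bin/env bash
# Invented sample data: an associative map from "vendor:device" to a
# space-separated list of PCI addresses (BDFs), as pci_bus_cache holds.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"   # Intel E810 functions
    ["0x15b3:0x1017"]=""                            # Mellanox CX-5: none present
)

e810=() mlx=()
# Deliberately unquoted, as in common.sh: the value word-splits into
# one array element per BDF, and an empty value appends nothing.
e810+=(${pci_bus_cache["0x8086:0x159b"]})
mlx+=(${pci_bus_cache["0x15b3:0x1017"]})

echo "e810 count: ${#e810[@]}"   # -> e810 count: 2
echo "mlx count: ${#mlx[@]}"     # -> mlx count: 0
```

The unquoted expansion is what lets a single map entry contribute zero, one, or several devices to a family list in one statement.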
00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:13.063 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:13.063 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:13.063 Found net devices under 0000:31:00.0: cvl_0_0 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.063 11:25:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:13.063 Found net devices under 0000:31:00.1: cvl_0_1 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.063 11:25:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.063 11:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.063 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.063 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.063 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:13.063 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:13.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:26:13.325 00:26:13.325 --- 10.0.0.2 ping statistics --- 00:26:13.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.325 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
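The `ipts` helper seen above (common.sh@287 expanding at @790) re-issues its arguments to iptables with a trailing `-m comment --comment 'SPDK_NVMF:<args>'`, tagging each rule so teardown can later match and delete exactly what the test inserted. A sketch of that wrapper with `iptables` stubbed by a shell function, since the real command needs root:

```shell
#!/usr/bin/env bash
# Stub so the example runs without privileges; the real helper invokes
# the actual iptables binary.
iptables() { echo "iptables $*"; }

ipts() {
    # Replay the caller's arguments, then append a comment recording them
    # verbatim under the SPDK_NVMF: prefix for later cleanup matching.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

A cleanup pass can then list rules, grep for `SPDK_NVMF:`, and replay each recorded argument string with `-D` instead of `-I`.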
00:26:13.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:26:13.325 00:26:13.325 --- 10.0.0.1 ping statistics --- 00:26:13.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.325 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3563246 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 3563246 00:26:13.325 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:13.326 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3563246 ']' 00:26:13.326 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.326 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.326 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.326 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.326 11:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.326 [2024-12-06 11:25:19.409835] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:26:13.326 [2024-12-06 11:25:19.409919] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.587 [2024-12-06 11:25:19.502156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.587 [2024-12-06 11:25:19.542749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.587 [2024-12-06 11:25:19.542785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
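The launch recorded above runs the target as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...` because common.sh@293 prepends the namespace wrapper array (`NVMF_TARGET_NS_CMD`, set at @266) onto `NVMF_APP`. A condensed sketch of that array composition, with the binary path abbreviated for illustration:

```shell
#!/usr/bin/env bash
# Names match the trace; only the nvmf_tgt path is shortened.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF -m 0x3)

# Prepend the netns wrapper so every later "${NVMF_APP[@]}" invocation
# runs inside the target's network namespace.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"
```

Keeping the wrapper as array elements (rather than a string) preserves word boundaries, so arguments with spaces would survive the later expansion intact.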
00:26:13.587 [2024-12-06 11:25:19.542793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.587 [2024-12-06 11:25:19.542800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.587 [2024-12-06 11:25:19.542806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.587 [2024-12-06 11:25:19.544126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.587 [2024-12-06 11:25:19.544132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3563246 00:26:14.157 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:14.418 [2024-12-06 11:25:20.411788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.418 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:14.678 Malloc0 00:26:14.678 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:14.678 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.938 11:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.199 [2024-12-06 11:25:21.111292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:15.199 [2024-12-06 11:25:21.267666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3563705 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3563705 /var/tmp/bdevperf.sock 00:26:15.199 11:25:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3563705 ']' 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:15.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.199 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.460 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.460 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:15.460 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:15.720 11:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:15.980 Nvme0n1 00:26:16.240 11:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:16.500 Nvme0n1 00:26:16.500 11:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:16.500 11:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:18.417 11:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:18.417 11:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:18.678 11:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.939 11:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:19.878 11:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:19.878 11:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.878 11:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.878 11:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.138 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.138 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:20.138 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.138 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.398 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.657 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.657 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.657 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.657 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.916 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.916 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.916 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.916 11:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.916 11:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.916 11:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:20.916 11:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.175 11:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:21.434 11:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:26:22.374 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:26:22.374 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:22.374 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:22.374 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:22.635 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:22.895 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:22.895 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:22.895 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:22.895 11:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:23.155 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:23.155 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:23.155 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:23.155 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:23.416 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:23.416 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:23.416 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:23.416 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:23.416 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:23.416 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:26:23.416 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:23.677 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:26:23.937 11:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:26:24.877 11:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:26:24.877 11:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:24.877 11:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:24.877 11:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:25.137 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:25.398 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:25.398 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:25.398 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:25.398 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:25.659 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:25.659 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:25.659 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:25.659 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:25.919 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:25.919 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:25.919 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:25.919 11:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:25.919 11:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:25.920 11:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:26:25.920 11:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:26.181 11:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:26.441 11:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:26:27.383 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:26:27.383 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:27.383 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:27.383 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:27.644 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:27.904 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:27.904 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:27.904 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:27.904 11:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:28.163 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:28.163 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:28.163 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:28.164 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:28.164 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:28.164 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:28.164 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:28.164 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:28.424 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:28.424 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:26:28.424 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:28.683 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:28.942 11:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:26:29.880 11:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:26:29.880 11:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:29.880 11:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:29.880 11:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:30.140 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:30.400 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:30.400 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:30.400 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:30.400 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:30.660 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:30.921 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:30.921 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:26:30.921 11:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:31.181 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:31.181 11:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:26:32.562 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:26:32.562 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:32.562 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:32.562 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:32.562 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:32.563 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:32.563 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:32.563 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:32.563 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:32.563 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:32.563 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:32.563 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:32.823 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:32.823 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:32.823 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:32.823 11:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:33.084 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:33.084 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:33.084 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:33.084 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:33.344 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:33.344 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:33.344 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:33.344 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:33.344 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:33.344 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:26:33.604 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:26:33.604 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:26:33.864 11:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:33.864 11:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:35.247 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:35.507 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:35.507 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:35.507 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:35.507 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:35.767 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:35.767 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:35.767 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:35.767 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:36.027 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:36.027 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:36.027 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:36.027 11:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:36.027 11:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:36.027 11:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:26:36.027 11:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:36.293 11:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:36.622 11:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:37.667 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:37.928 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:37.928 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:37.928 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:37.928 11:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:37.928 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:37.928 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:37.928 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:37.928 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:38.190 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:38.190 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:38.190 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:38.190 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:38.451 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:38.451 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:38.451 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:38.451 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:38.712 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:38.712 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:26:38.712 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:38.712 11:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:26:38.972 11:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:26:39.914 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:26:39.914 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:39.914 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:39.914 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:40.175 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:40.176 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:40.176 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:40.176 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:40.436 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:40.697 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:40.698 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:40.698 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:40.698 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:40.959 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:40.959 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:40.959 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:40.959 11:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:41.220 11:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:41.220 11:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:26:41.220 11:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:41.220 11:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:41.482 11:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:26:42.422 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:26:42.422 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:42.422 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.422 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:42.682 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.682 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:42.682 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.682 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:42.942 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:42.942
11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.942 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.942 11:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.942 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.942 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.943 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.943 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.203 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.203 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.203 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.203 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3563705 00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3563705 ']' 00:26:43.463 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3563705 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3563705 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3563705' 00:26:43.727 killing process with pid 3563705 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3563705 
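The repeated `port_status` checks traced above all follow one pattern: dump the host's I/O paths over the bdevperf RPC socket with `bdev_nvme_get_io_paths`, select the path for a given listener port by `trsvcid`, and compare one attribute (`current`, `connected`, or `accessible`) against the expected value. A minimal sketch of that check, using a canned JSON document shaped like the RPC output in place of the live socket (a real run would pipe `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` instead; the sample values here are illustrative, not taken from this run):

```shell
#!/bin/sh
# Canned stand-in for `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths`.
paths_json='{
  "poll_groups": [
    { "io_paths": [
      { "transport": { "trsvcid": "4420" }, "current": true,  "connected": true, "accessible": true },
      { "transport": { "trsvcid": "4421" }, "current": false, "connected": true, "accessible": false }
    ] }
  ]
}'

# port_status <port> <attribute> <expected>
# Mirrors the helper traced from host/multipath_status.sh: select the io_path
# whose listener port matches, read one attribute, compare to the expectation.
port_status() {
    status=$(printf '%s' "$paths_json" |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [ "$status" = "$3" ]
}

port_status 4420 current true     && echo "4420 current ok"
port_status 4421 accessible false && echo "4421 accessible ok"
```

After each `set_ANA_state` transition the script sleeps one second, then runs six of these checks (`current`, `connected`, `accessible` for both ports), which is exactly the `check_status true false true true true false` sequence visible in the trace.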
00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3563705 00:26:43.727 { 00:26:43.727 "results": [ 00:26:43.727 { 00:26:43.727 "job": "Nvme0n1", 00:26:43.727 "core_mask": "0x4", 00:26:43.727 "workload": "verify", 00:26:43.727 "status": "terminated", 00:26:43.727 "verify_range": { 00:26:43.727 "start": 0, 00:26:43.727 "length": 16384 00:26:43.727 }, 00:26:43.727 "queue_depth": 128, 00:26:43.727 "io_size": 4096, 00:26:43.727 "runtime": 26.987652, 00:26:43.727 "iops": 10821.76396820294, 00:26:43.727 "mibps": 42.272515500792736, 00:26:43.727 "io_failed": 0, 00:26:43.727 "io_timeout": 0, 00:26:43.727 "avg_latency_us": 11810.428596355468, 00:26:43.727 "min_latency_us": 226.98666666666668, 00:26:43.727 "max_latency_us": 3019898.88 00:26:43.727 } 00:26:43.727 ], 00:26:43.727 "core_count": 1 00:26:43.727 } 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3563705 00:26:43.727 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:43.727 [2024-12-06 11:25:21.333342] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:26:43.727 [2024-12-06 11:25:21.333404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563705 ] 00:26:43.727 [2024-12-06 11:25:21.398168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.727 [2024-12-06 11:25:21.427219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.727 Running I/O for 90 seconds... 
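When bdevperf is killed at the end of the test, it emits a result summary like the JSON block above. The interesting fields can be pulled out with `jq`; a hedged sketch over a trimmed copy of that summary (field names taken from the log, values abbreviated):

```shell
#!/bin/sh
# Trimmed copy of the bdevperf summary printed when the process is killed.
summary='{
  "results": [
    { "job": "Nvme0n1", "status": "terminated", "runtime": 26.987652, "iops": 10821.76 }
  ],
  "core_count": 1
}'

# Print one "job status iops" line per result entry.
printf '%s' "$summary" |
    jq -r '.results[] | "\(.job) \(.status) \(.iops)"'
```

A `status` of `terminated` is expected here, since the harness stops the verify workload with `kill` rather than letting the 90-second run complete.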
00:26:43.727 9536.00 IOPS, 37.25 MiB/s [2024-12-06T10:25:49.894Z] 9602.00 IOPS, 37.51 MiB/s [2024-12-06T10:25:49.894Z] 9615.67 IOPS, 37.56 MiB/s [2024-12-06T10:25:49.894Z] 9654.75 IOPS, 37.71 MiB/s [2024-12-06T10:25:49.894Z] 9946.20 IOPS, 38.85 MiB/s [2024-12-06T10:25:49.894Z] 10407.67 IOPS, 40.65 MiB/s [2024-12-06T10:25:49.894Z] 10803.14 IOPS, 42.20 MiB/s [2024-12-06T10:25:49.894Z] 10780.50 IOPS, 42.11 MiB/s [2024-12-06T10:25:49.894Z] 10668.33 IOPS, 41.67 MiB/s [2024-12-06T10:25:49.894Z] 10568.20 IOPS, 41.28 MiB/s [2024-12-06T10:25:49.894Z] 10491.09 IOPS, 40.98 MiB/s [2024-12-06T10:25:49.894Z] [2024-12-06 11:25:34.647151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.727 [2024-12-06 11:25:34.647185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:43.727 [2024-12-06 11:25:34.647398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.727 [2024-12-06 11:25:34.647403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.728 [2024-12-06 11:25:34.647466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:26:43.728 [2024-12-06 11:25:34.647672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 
[2024-12-06 11:25:34.647759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 
11:25:34.647855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.647993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.647998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.648011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.648016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.648027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.648032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.648044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.648049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.648060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.648065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.648076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.648081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.728 [2024-12-06 11:25:34.648092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.728 [2024-12-06 11:25:34.648097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.729 [2024-12-06 11:25:34.648108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.729 [2024-12-06 11:25:34.648113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.729 [2024-12-06 11:25:34.648124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.729 [2024-12-06 11:25:34.648130] nvme_qpair.c: 
00:26:43.729 [2024-12-06 11:25:34.648141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.729 [2024-12-06 11:25:34.648146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
[log condensed: the 11:25:34 burst continues with roughly 70 more command/completion pairs in the same format - READs (lba 80080-80600) and WRITEs (lba 80624-80672) on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. the namespace was unreachable through this path's ANA group during the multipath transition.]
00:26:43.730 IOPS samples: 10387.08 (40.57 MiB/s), 9588.08, 8903.21, 8353.13, 8642.50, 8875.53, 9284.56, 9687.37, 9981.00, 10123.76, 10244.36, 10471.96, 10740.62 (41.96 MiB/s)
[log condensed: a second burst at 11:25:47 repeats the pattern - WRITEs (lba 67840-68088) and READs (lba 67176-67824) on qid:1, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02).]
00:26:43.731 IOPS samples: 10916.64 (42.64 MiB/s), 10868.58 (42.46 MiB/s)
00:26:43.731 Received shutdown signal, test time was about 26.988261 seconds
00:26:43.731
00:26:43.731 Latency(us)
00:26:43.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:43.731 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:43.731 Verification LBA range: start 0x0 length 0x4000
00:26:43.731 Nvme0n1 : 26.99 10821.76 42.27 0.00 0.00 11810.43 226.99 3019898.88
00:26:43.731 ===================================================================================================================
00:26:43.731 Total : 10821.76 42.27 0.00 0.00 11810.43 226.99 3019898.88
00:26:43.731 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:43.992 11:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:43.992 rmmod nvme_tcp
00:26:43.992 rmmod nvme_fabrics
00:26:43.992 rmmod nvme_keyring
00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v
-r nvme-fabrics 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3563246 ']' 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3563246 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3563246 ']' 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3563246 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3563246 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3563246' 00:26:43.992 killing process with pid 3563246 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3563246 00:26:43.992 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3563246 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.253 11:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:46.797 00:26:46.797 real 0m41.709s 00:26:46.797 user 1m44.826s 00:26:46.797 sys 0m12.456s 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:46.797 ************************************ 00:26:46.797 END TEST nvmf_host_multipath_status 00:26:46.797 ************************************ 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh 
--transport=tcp 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.797 ************************************ 00:26:46.797 START TEST nvmf_discovery_remove_ifc 00:26:46.797 ************************************ 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.797 * Looking for test storage... 00:26:46.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.797 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.797 11:25:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:46.798 11:25:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:46.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.798 --rc genhtml_branch_coverage=1 00:26:46.798 --rc genhtml_function_coverage=1 00:26:46.798 --rc genhtml_legend=1 00:26:46.798 --rc geninfo_all_blocks=1 00:26:46.798 --rc geninfo_unexecuted_blocks=1 00:26:46.798 00:26:46.798 ' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:46.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.798 --rc genhtml_branch_coverage=1 00:26:46.798 --rc genhtml_function_coverage=1 00:26:46.798 --rc genhtml_legend=1 00:26:46.798 --rc geninfo_all_blocks=1 00:26:46.798 --rc geninfo_unexecuted_blocks=1 00:26:46.798 00:26:46.798 ' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:46.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.798 --rc genhtml_branch_coverage=1 00:26:46.798 --rc genhtml_function_coverage=1 00:26:46.798 --rc genhtml_legend=1 00:26:46.798 --rc geninfo_all_blocks=1 00:26:46.798 --rc geninfo_unexecuted_blocks=1 00:26:46.798 00:26:46.798 ' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:46.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.798 --rc genhtml_branch_coverage=1 00:26:46.798 --rc genhtml_function_coverage=1 00:26:46.798 --rc genhtml_legend=1 00:26:46.798 --rc geninfo_all_blocks=1 00:26:46.798 --rc geninfo_unexecuted_blocks=1 00:26:46.798 00:26:46.798 ' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:46.798 11:25:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.798 
11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:46.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:46.798 
11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:46.798 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:46.799 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.799 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.799 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.799 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:46.799 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:46.799 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:46.799 11:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:54.937 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:54.937 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:54.937 Found net devices under 0000:31:00.0: cvl_0_0 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.937 11:26:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:54.937 Found net devices under 0000:31:00.1: cvl_0_1 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.937 11:26:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.937 11:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.937 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.937 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.937 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.937 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.197 11:26:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:55.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:26:55.197 00:26:55.197 --- 10.0.0.2 ping statistics --- 00:26:55.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.197 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:55.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:26:55.197 00:26:55.197 --- 10.0.0.1 ping statistics --- 00:26:55.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.197 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3574137 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 3574137 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3574137 ']' 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.197 11:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.197 [2024-12-06 11:26:01.270926] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:26:55.197 [2024-12-06 11:26:01.270981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.457 [2024-12-06 11:26:01.379321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.457 [2024-12-06 11:26:01.429040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.457 [2024-12-06 11:26:01.429097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:55.457 [2024-12-06 11:26:01.429105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.457 [2024-12-06 11:26:01.429113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.457 [2024-12-06 11:26:01.429119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.457 [2024-12-06 11:26:01.429931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.028 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.028 [2024-12-06 11:26:02.139959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.028 [2024-12-06 11:26:02.148217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:56.028 null0 00:26:56.028 [2024-12-06 11:26:02.180144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:56.289 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.289 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3574393 00:26:56.289 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3574393 /tmp/host.sock 00:26:56.289 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:56.289 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3574393 ']' 00:26:56.289 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:56.290 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.290 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:56.290 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:56.290 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.290 11:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.290 [2024-12-06 11:26:02.266963] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:26:56.290 [2024-12-06 11:26:02.267030] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574393 ] 00:26:56.290 [2024-12-06 11:26:02.349621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.290 [2024-12-06 11:26:02.392336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.231 11:26:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.231 11:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.172 [2024-12-06 11:26:04.168623] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.172 [2024-12-06 11:26:04.168644] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.172 [2024-12-06 11:26:04.168658] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.172 [2024-12-06 11:26:04.299078] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:58.432 [2024-12-06 11:26:04.483196] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:58.432 [2024-12-06 11:26:04.484329] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b2e250:1 started. 
00:26:58.432 [2024-12-06 11:26:04.485910] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:58.432 [2024-12-06 11:26:04.485956] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:58.432 [2024-12-06 11:26:04.485978] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:58.432 [2024-12-06 11:26:04.485992] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:58.432 [2024-12-06 11:26:04.486016] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.433 [2024-12-06 11:26:04.529606] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b2e250 was disconnected and freed. delete nvme_qpair. 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:58.433 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.693 11:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.633 11:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.016 11:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.956 11:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.897 11:26:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.897 11:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.837 [2024-12-06 11:26:09.926408] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:03.837 [2024-12-06 11:26:09.926455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.837 [2024-12-06 11:26:09.926468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.837 [2024-12-06 11:26:09.926479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.837 [2024-12-06 11:26:09.926491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.837 [2024-12-06 11:26:09.926499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.837 [2024-12-06 11:26:09.926506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.837 [2024-12-06 11:26:09.926514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.837 [2024-12-06 11:26:09.926521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.837 [2024-12-06 11:26:09.926530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.837 [2024-12-06 11:26:09.926538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.838 [2024-12-06 11:26:09.926545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ac60 is same with the state(6) to be set 00:27:03.838 [2024-12-06 11:26:09.936428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0ac60 (9): Bad file descriptor 00:27:03.838 [2024-12-06 11:26:09.946464] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:03.838 [2024-12-06 11:26:09.946483] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:03.838 [2024-12-06 11:26:09.946490] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:03.838 [2024-12-06 11:26:09.946495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:03.838 [2024-12-06 11:26:09.946517] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:03.838 11:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.838 11:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.838 11:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.838 11:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.838 11:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.838 11:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.838 11:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.220 [2024-12-06 11:26:10.971910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:05.220 [2024-12-06 11:26:10.971961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0ac60 with addr=10.0.0.2, port=4420 00:27:05.220 [2024-12-06 11:26:10.971979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0ac60 is same with the state(6) to be set 00:27:05.220 [2024-12-06 11:26:10.972015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0ac60 (9): Bad file descriptor 00:27:05.220 [2024-12-06 11:26:10.972421] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:27:05.220 [2024-12-06 11:26:10.972448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:05.220 [2024-12-06 11:26:10.972457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:05.220 [2024-12-06 11:26:10.972467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:05.220 [2024-12-06 11:26:10.972480] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:05.220 [2024-12-06 11:26:10.972487] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:05.220 [2024-12-06 11:26:10.972492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:05.220 [2024-12-06 11:26:10.972500] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:05.220 [2024-12-06 11:26:10.972506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:05.220 11:26:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.220 11:26:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:05.220 11:26:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.161 [2024-12-06 11:26:11.974878] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:06.161 [2024-12-06 11:26:11.974900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:06.161 [2024-12-06 11:26:11.974911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:06.161 [2024-12-06 11:26:11.974919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:06.161 [2024-12-06 11:26:11.974928] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:06.161 [2024-12-06 11:26:11.974936] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:06.161 [2024-12-06 11:26:11.974942] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:06.161 [2024-12-06 11:26:11.974947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:06.161 [2024-12-06 11:26:11.974972] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:06.161 [2024-12-06 11:26:11.974998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.161 [2024-12-06 11:26:11.975009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.161 [2024-12-06 11:26:11.975021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.161 [2024-12-06 11:26:11.975028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.161 [2024-12-06 11:26:11.975037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:06.161 [2024-12-06 11:26:11.975045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.161 [2024-12-06 11:26:11.975053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.161 [2024-12-06 11:26:11.975061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.161 [2024-12-06 11:26:11.975070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.161 [2024-12-06 11:26:11.975077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.161 [2024-12-06 11:26:11.975085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:06.161 [2024-12-06 11:26:11.975314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af9fa0 (9): Bad file descriptor 00:27:06.161 [2024-12-06 11:26:11.976329] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:06.162 [2024-12-06 11:26:11.976341] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.162 
11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.162 11:26:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:07.103 11:26:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.045 [2024-12-06 11:26:14.034781] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:08.045 [2024-12-06 11:26:14.034807] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:08.045 [2024-12-06 11:26:14.034821] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:08.045 [2024-12-06 11:26:14.164208] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:08.305 [2024-12-06 11:26:14.221924] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:08.305 [2024-12-06 11:26:14.222732] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1b15520:1 started. 00:27:08.305 [2024-12-06 11:26:14.223964] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:08.305 [2024-12-06 11:26:14.223998] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:08.306 [2024-12-06 11:26:14.224017] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:08.306 [2024-12-06 11:26:14.224032] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:08.306 [2024-12-06 11:26:14.224040] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:08.306 [2024-12-06 11:26:14.231888] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1b15520 was disconnected and freed. delete nvme_qpair. 
00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3574393 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3574393 ']' 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3574393 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3574393 
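The `killprocess 3574393` trace at this point shows the safety checks the harness runs before killing a pid: a non-empty-pid guard, a `kill -0` liveness probe, and a `ps --no-headers -o comm=` lookup so it never kills a bare `sudo`. A simplified stand-in for that helper (not SPDK's actual autotest_common.sh), assuming a Linux `ps`:

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess pattern traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # kill -0: liveness probe only
    local process_name
    process_name=$(ps --no-headers -o comm= -p "$pid")
    [ "$process_name" = sudo ] && return 1    # refuse to kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap so the pid is fully gone
}

sleep 30 &
killprocess $!
```

The real helper additionally branches on `uname` (the `'[' Linux = Linux ']'` record) because `ps` option syntax differs on FreeBSD; that platform split is omitted here.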
00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3574393' 00:27:08.306 killing process with pid 3574393 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3574393 00:27:08.306 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3574393 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.567 rmmod nvme_tcp 00:27:08.567 rmmod nvme_fabrics 00:27:08.567 rmmod nvme_keyring 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3574137 ']' 00:27:08.567 
11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3574137 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3574137 ']' 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3574137 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3574137 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3574137' 00:27:08.567 killing process with pid 3574137 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3574137 00:27:08.567 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3574137 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.828 11:26:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.828 11:26:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.749 11:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.749 00:27:10.749 real 0m24.404s 00:27:10.749 user 0m27.579s 00:27:10.749 sys 0m7.919s 00:27:10.749 11:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.749 11:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.749 ************************************ 00:27:10.749 END TEST nvmf_discovery_remove_ifc 00:27:10.749 ************************************ 00:27:10.749 11:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.749 11:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:10.749 11:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.749 11:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.010 ************************************ 00:27:11.010 
START TEST nvmf_identify_kernel_target 00:27:11.010 ************************************ 00:27:11.010 11:26:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:11.010 * Looking for test storage... 00:27:11.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.010 11:26:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.010 --rc genhtml_branch_coverage=1 00:27:11.010 --rc genhtml_function_coverage=1 00:27:11.010 --rc genhtml_legend=1 00:27:11.010 --rc geninfo_all_blocks=1 00:27:11.010 --rc geninfo_unexecuted_blocks=1 00:27:11.010 00:27:11.010 ' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.010 --rc genhtml_branch_coverage=1 00:27:11.010 --rc genhtml_function_coverage=1 00:27:11.010 --rc genhtml_legend=1 00:27:11.010 --rc geninfo_all_blocks=1 00:27:11.010 --rc geninfo_unexecuted_blocks=1 00:27:11.010 00:27:11.010 ' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.010 --rc genhtml_branch_coverage=1 00:27:11.010 --rc genhtml_function_coverage=1 00:27:11.010 --rc genhtml_legend=1 00:27:11.010 --rc geninfo_all_blocks=1 00:27:11.010 --rc geninfo_unexecuted_blocks=1 00:27:11.010 00:27:11.010 ' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:11.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.010 --rc genhtml_branch_coverage=1 00:27:11.010 --rc genhtml_function_coverage=1 00:27:11.010 --rc genhtml_legend=1 00:27:11.010 --rc geninfo_all_blocks=1 
00:27:11.010 --rc geninfo_unexecuted_blocks=1 00:27:11.010 00:27:11.010 ' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.010 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
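A few records earlier the trace walks scripts/common.sh's version comparison (`lt 1.15 2` via `cmp_versions`): split both dotted versions on `.` into arrays, then compare field by field numerically, treating a missing field as 0. A minimal sketch of that idea (`version_lt` is a hypothetical name; the real helpers are `lt`/`cmp_versions` with more operators):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced from scripts/common.sh.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)                     # split "1.15" -> (1 15), etc.
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}        # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 is less than 2"
```

Numeric field-by-field comparison is what makes `1.2 < 1.10` hold, which a plain string comparison would get wrong; that is exactly why the harness gates the lcov flags on this check rather than on `[[ ... < ... ]]`.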
00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.011 11:26:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.145 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.145 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.145 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.145 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.145 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.146 11:26:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.146 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:19.147 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.147 11:26:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:19.147 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.147 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.148 11:26:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:19.148 Found net devices under 0000:31:00.0: cvl_0_0 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:19.148 Found net devices under 0000:31:00.1: cvl_0_1 
00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.148 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.149 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.150 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:19.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:27:19.413 00:27:19.413 --- 10.0.0.2 ping statistics --- 00:27:19.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.413 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:27:19.413 00:27:19.413 --- 10.0.0.1 ping statistics --- 00:27:19.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.413 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:19.413 
11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:19.413 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:19.673 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:19.673 11:26:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:23.882 Waiting for block devices as requested 00:27:23.882 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:23.882 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:23.882 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:23.882 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:23.882 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:23.882 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:23.882 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:24.144 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:24.144 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:24.405 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.405 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.405 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.405 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:24.667 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:24.667 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:27:24.667 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:24.928 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:25.189 No valid GPT data, bailing 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:25.189 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:25.190 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:25.453 00:27:25.453 Discovery Log Number of Records 2, Generation counter 2 00:27:25.453 =====Discovery Log Entry 0====== 00:27:25.453 trtype: tcp 00:27:25.453 adrfam: ipv4 00:27:25.453 subtype: current discovery subsystem 
00:27:25.453 treq: not specified, sq flow control disable supported 00:27:25.453 portid: 1 00:27:25.453 trsvcid: 4420 00:27:25.453 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:25.453 traddr: 10.0.0.1 00:27:25.453 eflags: none 00:27:25.453 sectype: none 00:27:25.453 =====Discovery Log Entry 1====== 00:27:25.453 trtype: tcp 00:27:25.453 adrfam: ipv4 00:27:25.453 subtype: nvme subsystem 00:27:25.453 treq: not specified, sq flow control disable supported 00:27:25.453 portid: 1 00:27:25.453 trsvcid: 4420 00:27:25.453 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:25.453 traddr: 10.0.0.1 00:27:25.453 eflags: none 00:27:25.453 sectype: none 00:27:25.453 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:25.453 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:25.453 ===================================================== 00:27:25.453 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:25.453 ===================================================== 00:27:25.453 Controller Capabilities/Features 00:27:25.453 ================================ 00:27:25.453 Vendor ID: 0000 00:27:25.453 Subsystem Vendor ID: 0000 00:27:25.453 Serial Number: 95e95bf6cf816333da0c 00:27:25.453 Model Number: Linux 00:27:25.453 Firmware Version: 6.8.9-20 00:27:25.453 Recommended Arb Burst: 0 00:27:25.453 IEEE OUI Identifier: 00 00 00 00:27:25.453 Multi-path I/O 00:27:25.453 May have multiple subsystem ports: No 00:27:25.453 May have multiple controllers: No 00:27:25.453 Associated with SR-IOV VF: No 00:27:25.453 Max Data Transfer Size: Unlimited 00:27:25.453 Max Number of Namespaces: 0 00:27:25.453 Max Number of I/O Queues: 1024 00:27:25.453 NVMe Specification Version (VS): 1.3 00:27:25.453 NVMe Specification Version (Identify): 1.3 00:27:25.453 Maximum Queue Entries: 1024 
00:27:25.453 Contiguous Queues Required: No 00:27:25.453 Arbitration Mechanisms Supported 00:27:25.453 Weighted Round Robin: Not Supported 00:27:25.453 Vendor Specific: Not Supported 00:27:25.453 Reset Timeout: 7500 ms 00:27:25.453 Doorbell Stride: 4 bytes 00:27:25.453 NVM Subsystem Reset: Not Supported 00:27:25.453 Command Sets Supported 00:27:25.453 NVM Command Set: Supported 00:27:25.453 Boot Partition: Not Supported 00:27:25.453 Memory Page Size Minimum: 4096 bytes 00:27:25.453 Memory Page Size Maximum: 4096 bytes 00:27:25.453 Persistent Memory Region: Not Supported 00:27:25.453 Optional Asynchronous Events Supported 00:27:25.453 Namespace Attribute Notices: Not Supported 00:27:25.453 Firmware Activation Notices: Not Supported 00:27:25.453 ANA Change Notices: Not Supported 00:27:25.453 PLE Aggregate Log Change Notices: Not Supported 00:27:25.453 LBA Status Info Alert Notices: Not Supported 00:27:25.453 EGE Aggregate Log Change Notices: Not Supported 00:27:25.453 Normal NVM Subsystem Shutdown event: Not Supported 00:27:25.453 Zone Descriptor Change Notices: Not Supported 00:27:25.453 Discovery Log Change Notices: Supported 00:27:25.453 Controller Attributes 00:27:25.453 128-bit Host Identifier: Not Supported 00:27:25.453 Non-Operational Permissive Mode: Not Supported 00:27:25.453 NVM Sets: Not Supported 00:27:25.453 Read Recovery Levels: Not Supported 00:27:25.453 Endurance Groups: Not Supported 00:27:25.453 Predictable Latency Mode: Not Supported 00:27:25.453 Traffic Based Keep ALive: Not Supported 00:27:25.453 Namespace Granularity: Not Supported 00:27:25.453 SQ Associations: Not Supported 00:27:25.453 UUID List: Not Supported 00:27:25.453 Multi-Domain Subsystem: Not Supported 00:27:25.453 Fixed Capacity Management: Not Supported 00:27:25.453 Variable Capacity Management: Not Supported 00:27:25.453 Delete Endurance Group: Not Supported 00:27:25.453 Delete NVM Set: Not Supported 00:27:25.453 Extended LBA Formats Supported: Not Supported 00:27:25.453 Flexible 
Data Placement Supported: Not Supported 00:27:25.453 00:27:25.453 Controller Memory Buffer Support 00:27:25.453 ================================ 00:27:25.453 Supported: No 00:27:25.453 00:27:25.453 Persistent Memory Region Support 00:27:25.453 ================================ 00:27:25.453 Supported: No 00:27:25.453 00:27:25.453 Admin Command Set Attributes 00:27:25.453 ============================ 00:27:25.453 Security Send/Receive: Not Supported 00:27:25.453 Format NVM: Not Supported 00:27:25.453 Firmware Activate/Download: Not Supported 00:27:25.453 Namespace Management: Not Supported 00:27:25.453 Device Self-Test: Not Supported 00:27:25.453 Directives: Not Supported 00:27:25.453 NVMe-MI: Not Supported 00:27:25.453 Virtualization Management: Not Supported 00:27:25.453 Doorbell Buffer Config: Not Supported 00:27:25.453 Get LBA Status Capability: Not Supported 00:27:25.453 Command & Feature Lockdown Capability: Not Supported 00:27:25.453 Abort Command Limit: 1 00:27:25.453 Async Event Request Limit: 1 00:27:25.453 Number of Firmware Slots: N/A 00:27:25.453 Firmware Slot 1 Read-Only: N/A 00:27:25.453 Firmware Activation Without Reset: N/A 00:27:25.453 Multiple Update Detection Support: N/A 00:27:25.453 Firmware Update Granularity: No Information Provided 00:27:25.453 Per-Namespace SMART Log: No 00:27:25.453 Asymmetric Namespace Access Log Page: Not Supported 00:27:25.453 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:25.453 Command Effects Log Page: Not Supported 00:27:25.453 Get Log Page Extended Data: Supported 00:27:25.453 Telemetry Log Pages: Not Supported 00:27:25.453 Persistent Event Log Pages: Not Supported 00:27:25.453 Supported Log Pages Log Page: May Support 00:27:25.453 Commands Supported & Effects Log Page: Not Supported 00:27:25.453 Feature Identifiers & Effects Log Page:May Support 00:27:25.453 NVMe-MI Commands & Effects Log Page: May Support 00:27:25.453 Data Area 4 for Telemetry Log: Not Supported 00:27:25.453 Error Log Page Entries 
Supported: 1 00:27:25.453 Keep Alive: Not Supported 00:27:25.453 00:27:25.453 NVM Command Set Attributes 00:27:25.453 ========================== 00:27:25.453 Submission Queue Entry Size 00:27:25.454 Max: 1 00:27:25.454 Min: 1 00:27:25.454 Completion Queue Entry Size 00:27:25.454 Max: 1 00:27:25.454 Min: 1 00:27:25.454 Number of Namespaces: 0 00:27:25.454 Compare Command: Not Supported 00:27:25.454 Write Uncorrectable Command: Not Supported 00:27:25.454 Dataset Management Command: Not Supported 00:27:25.454 Write Zeroes Command: Not Supported 00:27:25.454 Set Features Save Field: Not Supported 00:27:25.454 Reservations: Not Supported 00:27:25.454 Timestamp: Not Supported 00:27:25.454 Copy: Not Supported 00:27:25.454 Volatile Write Cache: Not Present 00:27:25.454 Atomic Write Unit (Normal): 1 00:27:25.454 Atomic Write Unit (PFail): 1 00:27:25.454 Atomic Compare & Write Unit: 1 00:27:25.454 Fused Compare & Write: Not Supported 00:27:25.454 Scatter-Gather List 00:27:25.454 SGL Command Set: Supported 00:27:25.454 SGL Keyed: Not Supported 00:27:25.454 SGL Bit Bucket Descriptor: Not Supported 00:27:25.454 SGL Metadata Pointer: Not Supported 00:27:25.454 Oversized SGL: Not Supported 00:27:25.454 SGL Metadata Address: Not Supported 00:27:25.454 SGL Offset: Supported 00:27:25.454 Transport SGL Data Block: Not Supported 00:27:25.454 Replay Protected Memory Block: Not Supported 00:27:25.454 00:27:25.454 Firmware Slot Information 00:27:25.454 ========================= 00:27:25.454 Active slot: 0 00:27:25.454 00:27:25.454 00:27:25.454 Error Log 00:27:25.454 ========= 00:27:25.454 00:27:25.454 Active Namespaces 00:27:25.454 ================= 00:27:25.454 Discovery Log Page 00:27:25.454 ================== 00:27:25.454 Generation Counter: 2 00:27:25.454 Number of Records: 2 00:27:25.454 Record Format: 0 00:27:25.454 00:27:25.454 Discovery Log Entry 0 00:27:25.454 ---------------------- 00:27:25.454 Transport Type: 3 (TCP) 00:27:25.454 Address Family: 1 (IPv4) 00:27:25.454 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:25.454 Entry Flags: 00:27:25.454 Duplicate Returned Information: 0 00:27:25.454 Explicit Persistent Connection Support for Discovery: 0 00:27:25.454 Transport Requirements: 00:27:25.454 Secure Channel: Not Specified 00:27:25.454 Port ID: 1 (0x0001) 00:27:25.454 Controller ID: 65535 (0xffff) 00:27:25.454 Admin Max SQ Size: 32 00:27:25.454 Transport Service Identifier: 4420 00:27:25.454 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:25.454 Transport Address: 10.0.0.1 00:27:25.454 Discovery Log Entry 1 00:27:25.454 ---------------------- 00:27:25.454 Transport Type: 3 (TCP) 00:27:25.454 Address Family: 1 (IPv4) 00:27:25.454 Subsystem Type: 2 (NVM Subsystem) 00:27:25.454 Entry Flags: 00:27:25.454 Duplicate Returned Information: 0 00:27:25.454 Explicit Persistent Connection Support for Discovery: 0 00:27:25.454 Transport Requirements: 00:27:25.454 Secure Channel: Not Specified 00:27:25.454 Port ID: 1 (0x0001) 00:27:25.454 Controller ID: 65535 (0xffff) 00:27:25.454 Admin Max SQ Size: 32 00:27:25.454 Transport Service Identifier: 4420 00:27:25.454 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:25.454 Transport Address: 10.0.0.1 00:27:25.454 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:25.454 get_feature(0x01) failed 00:27:25.454 get_feature(0x02) failed 00:27:25.454 get_feature(0x04) failed 00:27:25.454 ===================================================== 00:27:25.454 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:25.454 ===================================================== 00:27:25.454 Controller Capabilities/Features 00:27:25.454 ================================ 00:27:25.454 Vendor ID: 0000 00:27:25.454 Subsystem Vendor ID: 
0000 00:27:25.454 Serial Number: 7e7c9aadeca407ed4cb1 00:27:25.454 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:25.454 Firmware Version: 6.8.9-20 00:27:25.454 Recommended Arb Burst: 6 00:27:25.454 IEEE OUI Identifier: 00 00 00 00:27:25.454 Multi-path I/O 00:27:25.454 May have multiple subsystem ports: Yes 00:27:25.454 May have multiple controllers: Yes 00:27:25.454 Associated with SR-IOV VF: No 00:27:25.454 Max Data Transfer Size: Unlimited 00:27:25.454 Max Number of Namespaces: 1024 00:27:25.454 Max Number of I/O Queues: 128 00:27:25.454 NVMe Specification Version (VS): 1.3 00:27:25.454 NVMe Specification Version (Identify): 1.3 00:27:25.454 Maximum Queue Entries: 1024 00:27:25.454 Contiguous Queues Required: No 00:27:25.454 Arbitration Mechanisms Supported 00:27:25.454 Weighted Round Robin: Not Supported 00:27:25.454 Vendor Specific: Not Supported 00:27:25.454 Reset Timeout: 7500 ms 00:27:25.454 Doorbell Stride: 4 bytes 00:27:25.454 NVM Subsystem Reset: Not Supported 00:27:25.454 Command Sets Supported 00:27:25.454 NVM Command Set: Supported 00:27:25.454 Boot Partition: Not Supported 00:27:25.454 Memory Page Size Minimum: 4096 bytes 00:27:25.454 Memory Page Size Maximum: 4096 bytes 00:27:25.454 Persistent Memory Region: Not Supported 00:27:25.454 Optional Asynchronous Events Supported 00:27:25.454 Namespace Attribute Notices: Supported 00:27:25.454 Firmware Activation Notices: Not Supported 00:27:25.454 ANA Change Notices: Supported 00:27:25.454 PLE Aggregate Log Change Notices: Not Supported 00:27:25.454 LBA Status Info Alert Notices: Not Supported 00:27:25.454 EGE Aggregate Log Change Notices: Not Supported 00:27:25.454 Normal NVM Subsystem Shutdown event: Not Supported 00:27:25.454 Zone Descriptor Change Notices: Not Supported 00:27:25.454 Discovery Log Change Notices: Not Supported 00:27:25.454 Controller Attributes 00:27:25.454 128-bit Host Identifier: Supported 00:27:25.454 Non-Operational Permissive Mode: Not Supported 00:27:25.454 NVM Sets: Not 
Supported 00:27:25.454 Read Recovery Levels: Not Supported 00:27:25.454 Endurance Groups: Not Supported 00:27:25.454 Predictable Latency Mode: Not Supported 00:27:25.454 Traffic Based Keep ALive: Supported 00:27:25.454 Namespace Granularity: Not Supported 00:27:25.454 SQ Associations: Not Supported 00:27:25.454 UUID List: Not Supported 00:27:25.454 Multi-Domain Subsystem: Not Supported 00:27:25.454 Fixed Capacity Management: Not Supported 00:27:25.454 Variable Capacity Management: Not Supported 00:27:25.454 Delete Endurance Group: Not Supported 00:27:25.454 Delete NVM Set: Not Supported 00:27:25.454 Extended LBA Formats Supported: Not Supported 00:27:25.454 Flexible Data Placement Supported: Not Supported 00:27:25.454 00:27:25.454 Controller Memory Buffer Support 00:27:25.454 ================================ 00:27:25.454 Supported: No 00:27:25.454 00:27:25.454 Persistent Memory Region Support 00:27:25.454 ================================ 00:27:25.454 Supported: No 00:27:25.454 00:27:25.454 Admin Command Set Attributes 00:27:25.454 ============================ 00:27:25.454 Security Send/Receive: Not Supported 00:27:25.454 Format NVM: Not Supported 00:27:25.454 Firmware Activate/Download: Not Supported 00:27:25.454 Namespace Management: Not Supported 00:27:25.454 Device Self-Test: Not Supported 00:27:25.454 Directives: Not Supported 00:27:25.454 NVMe-MI: Not Supported 00:27:25.454 Virtualization Management: Not Supported 00:27:25.454 Doorbell Buffer Config: Not Supported 00:27:25.454 Get LBA Status Capability: Not Supported 00:27:25.454 Command & Feature Lockdown Capability: Not Supported 00:27:25.454 Abort Command Limit: 4 00:27:25.454 Async Event Request Limit: 4 00:27:25.454 Number of Firmware Slots: N/A 00:27:25.454 Firmware Slot 1 Read-Only: N/A 00:27:25.454 Firmware Activation Without Reset: N/A 00:27:25.454 Multiple Update Detection Support: N/A 00:27:25.454 Firmware Update Granularity: No Information Provided 00:27:25.454 Per-Namespace SMART Log: Yes 
00:27:25.454 Asymmetric Namespace Access Log Page: Supported 00:27:25.454 ANA Transition Time : 10 sec 00:27:25.454 00:27:25.454 Asymmetric Namespace Access Capabilities 00:27:25.454 ANA Optimized State : Supported 00:27:25.454 ANA Non-Optimized State : Supported 00:27:25.454 ANA Inaccessible State : Supported 00:27:25.454 ANA Persistent Loss State : Supported 00:27:25.454 ANA Change State : Supported 00:27:25.454 ANAGRPID is not changed : No 00:27:25.454 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:25.454 00:27:25.455 ANA Group Identifier Maximum : 128 00:27:25.455 Number of ANA Group Identifiers : 128 00:27:25.455 Max Number of Allowed Namespaces : 1024 00:27:25.455 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:25.455 Command Effects Log Page: Supported 00:27:25.455 Get Log Page Extended Data: Supported 00:27:25.455 Telemetry Log Pages: Not Supported 00:27:25.455 Persistent Event Log Pages: Not Supported 00:27:25.455 Supported Log Pages Log Page: May Support 00:27:25.455 Commands Supported & Effects Log Page: Not Supported 00:27:25.455 Feature Identifiers & Effects Log Page:May Support 00:27:25.455 NVMe-MI Commands & Effects Log Page: May Support 00:27:25.455 Data Area 4 for Telemetry Log: Not Supported 00:27:25.455 Error Log Page Entries Supported: 128 00:27:25.455 Keep Alive: Supported 00:27:25.455 Keep Alive Granularity: 1000 ms 00:27:25.455 00:27:25.455 NVM Command Set Attributes 00:27:25.455 ========================== 00:27:25.455 Submission Queue Entry Size 00:27:25.455 Max: 64 00:27:25.455 Min: 64 00:27:25.455 Completion Queue Entry Size 00:27:25.455 Max: 16 00:27:25.455 Min: 16 00:27:25.455 Number of Namespaces: 1024 00:27:25.455 Compare Command: Not Supported 00:27:25.455 Write Uncorrectable Command: Not Supported 00:27:25.455 Dataset Management Command: Supported 00:27:25.455 Write Zeroes Command: Supported 00:27:25.455 Set Features Save Field: Not Supported 00:27:25.455 Reservations: Not Supported 00:27:25.455 Timestamp: Not Supported 
00:27:25.455 Copy: Not Supported 00:27:25.455 Volatile Write Cache: Present 00:27:25.455 Atomic Write Unit (Normal): 1 00:27:25.455 Atomic Write Unit (PFail): 1 00:27:25.455 Atomic Compare & Write Unit: 1 00:27:25.455 Fused Compare & Write: Not Supported 00:27:25.455 Scatter-Gather List 00:27:25.455 SGL Command Set: Supported 00:27:25.455 SGL Keyed: Not Supported 00:27:25.455 SGL Bit Bucket Descriptor: Not Supported 00:27:25.455 SGL Metadata Pointer: Not Supported 00:27:25.455 Oversized SGL: Not Supported 00:27:25.455 SGL Metadata Address: Not Supported 00:27:25.455 SGL Offset: Supported 00:27:25.455 Transport SGL Data Block: Not Supported 00:27:25.455 Replay Protected Memory Block: Not Supported 00:27:25.455 00:27:25.455 Firmware Slot Information 00:27:25.455 ========================= 00:27:25.455 Active slot: 0 00:27:25.455 00:27:25.455 Asymmetric Namespace Access 00:27:25.455 =========================== 00:27:25.455 Change Count : 0 00:27:25.455 Number of ANA Group Descriptors : 1 00:27:25.455 ANA Group Descriptor : 0 00:27:25.455 ANA Group ID : 1 00:27:25.455 Number of NSID Values : 1 00:27:25.455 Change Count : 0 00:27:25.455 ANA State : 1 00:27:25.455 Namespace Identifier : 1 00:27:25.455 00:27:25.455 Commands Supported and Effects 00:27:25.455 ============================== 00:27:25.455 Admin Commands 00:27:25.455 -------------- 00:27:25.455 Get Log Page (02h): Supported 00:27:25.455 Identify (06h): Supported 00:27:25.455 Abort (08h): Supported 00:27:25.455 Set Features (09h): Supported 00:27:25.455 Get Features (0Ah): Supported 00:27:25.455 Asynchronous Event Request (0Ch): Supported 00:27:25.455 Keep Alive (18h): Supported 00:27:25.455 I/O Commands 00:27:25.455 ------------ 00:27:25.455 Flush (00h): Supported 00:27:25.455 Write (01h): Supported LBA-Change 00:27:25.455 Read (02h): Supported 00:27:25.455 Write Zeroes (08h): Supported LBA-Change 00:27:25.455 Dataset Management (09h): Supported 00:27:25.455 00:27:25.455 Error Log 00:27:25.455 ========= 
00:27:25.455 Entry: 0 00:27:25.455 Error Count: 0x3 00:27:25.455 Submission Queue Id: 0x0 00:27:25.455 Command Id: 0x5 00:27:25.455 Phase Bit: 0 00:27:25.455 Status Code: 0x2 00:27:25.455 Status Code Type: 0x0 00:27:25.455 Do Not Retry: 1 00:27:25.455 Error Location: 0x28 00:27:25.455 LBA: 0x0 00:27:25.455 Namespace: 0x0 00:27:25.455 Vendor Log Page: 0x0 00:27:25.455 ----------- 00:27:25.455 Entry: 1 00:27:25.455 Error Count: 0x2 00:27:25.455 Submission Queue Id: 0x0 00:27:25.455 Command Id: 0x5 00:27:25.455 Phase Bit: 0 00:27:25.455 Status Code: 0x2 00:27:25.455 Status Code Type: 0x0 00:27:25.455 Do Not Retry: 1 00:27:25.455 Error Location: 0x28 00:27:25.455 LBA: 0x0 00:27:25.455 Namespace: 0x0 00:27:25.455 Vendor Log Page: 0x0 00:27:25.455 ----------- 00:27:25.455 Entry: 2 00:27:25.455 Error Count: 0x1 00:27:25.455 Submission Queue Id: 0x0 00:27:25.455 Command Id: 0x4 00:27:25.455 Phase Bit: 0 00:27:25.455 Status Code: 0x2 00:27:25.455 Status Code Type: 0x0 00:27:25.455 Do Not Retry: 1 00:27:25.455 Error Location: 0x28 00:27:25.455 LBA: 0x0 00:27:25.455 Namespace: 0x0 00:27:25.455 Vendor Log Page: 0x0 00:27:25.455 00:27:25.455 Number of Queues 00:27:25.455 ================ 00:27:25.455 Number of I/O Submission Queues: 128 00:27:25.455 Number of I/O Completion Queues: 128 00:27:25.455 00:27:25.455 ZNS Specific Controller Data 00:27:25.455 ============================ 00:27:25.455 Zone Append Size Limit: 0 00:27:25.455 00:27:25.455 00:27:25.455 Active Namespaces 00:27:25.455 ================= 00:27:25.455 get_feature(0x05) failed 00:27:25.455 Namespace ID:1 00:27:25.455 Command Set Identifier: NVM (00h) 00:27:25.455 Deallocate: Supported 00:27:25.455 Deallocated/Unwritten Error: Not Supported 00:27:25.455 Deallocated Read Value: Unknown 00:27:25.455 Deallocate in Write Zeroes: Not Supported 00:27:25.455 Deallocated Guard Field: 0xFFFF 00:27:25.455 Flush: Supported 00:27:25.455 Reservation: Not Supported 00:27:25.455 Namespace Sharing Capabilities: Multiple 
Controllers 00:27:25.455 Size (in LBAs): 3750748848 (1788GiB) 00:27:25.455 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:25.455 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:25.455 UUID: e524f019-a794-400a-bb4f-f513ba3790e9 00:27:25.455 Thin Provisioning: Not Supported 00:27:25.455 Per-NS Atomic Units: Yes 00:27:25.455 Atomic Write Unit (Normal): 8 00:27:25.455 Atomic Write Unit (PFail): 8 00:27:25.455 Preferred Write Granularity: 8 00:27:25.455 Atomic Compare & Write Unit: 8 00:27:25.455 Atomic Boundary Size (Normal): 0 00:27:25.455 Atomic Boundary Size (PFail): 0 00:27:25.455 Atomic Boundary Offset: 0 00:27:25.455 NGUID/EUI64 Never Reused: No 00:27:25.455 ANA group ID: 1 00:27:25.455 Namespace Write Protected: No 00:27:25.455 Number of LBA Formats: 1 00:27:25.455 Current LBA Format: LBA Format #00 00:27:25.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:25.455 00:27:25.455 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:25.455 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.455 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:25.455 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.455 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:25.455 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.455 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.455 rmmod nvme_tcp 00:27:25.455 rmmod nvme_fabrics 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:25.717 11:26:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.717 11:26:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:27.635 11:26:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:31.844 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:27:31.844 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.844 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:32.106 00:27:32.106 real 0m21.237s 00:27:32.106 user 0m5.927s 00:27:32.106 sys 0m12.325s 00:27:32.106 11:26:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:32.106 11:26:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.106 ************************************ 00:27:32.106 END TEST nvmf_identify_kernel_target 00:27:32.106 ************************************ 00:27:32.106 11:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.106 11:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:32.106 11:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.106 11:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.106 ************************************ 00:27:32.106 START TEST nvmf_auth_host 00:27:32.106 ************************************ 00:27:32.106 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.408 * Looking for test storage... 
00:27:32.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:32.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.408 --rc genhtml_branch_coverage=1 00:27:32.408 --rc genhtml_function_coverage=1 00:27:32.408 --rc genhtml_legend=1 00:27:32.408 --rc geninfo_all_blocks=1 00:27:32.408 --rc geninfo_unexecuted_blocks=1 00:27:32.408 00:27:32.408 ' 00:27:32.408 11:26:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:32.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.408 --rc genhtml_branch_coverage=1 00:27:32.408 --rc genhtml_function_coverage=1 00:27:32.408 --rc genhtml_legend=1 00:27:32.408 --rc geninfo_all_blocks=1 00:27:32.408 --rc geninfo_unexecuted_blocks=1 00:27:32.408 00:27:32.408 ' 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:32.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.408 --rc genhtml_branch_coverage=1 00:27:32.408 --rc genhtml_function_coverage=1 00:27:32.408 --rc genhtml_legend=1 00:27:32.408 --rc geninfo_all_blocks=1 00:27:32.408 --rc geninfo_unexecuted_blocks=1 00:27:32.408 00:27:32.408 ' 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:32.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.408 --rc genhtml_branch_coverage=1 00:27:32.408 --rc genhtml_function_coverage=1 00:27:32.408 --rc genhtml_legend=1 00:27:32.408 --rc geninfo_all_blocks=1 00:27:32.408 --rc geninfo_unexecuted_blocks=1 00:27:32.408 00:27:32.408 ' 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.408 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.409 11:26:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:32.409 11:26:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.409 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.625 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:40.626 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:40.626 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:40.626 Found net devices under 0000:31:00.0: cvl_0_0 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:40.626 Found net devices under 0000:31:00.1: cvl_0_1 00:27:40.626 11:26:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.626 11:26:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.626 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:27:40.627 00:27:40.627 --- 10.0.0.2 ping statistics --- 00:27:40.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.627 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:27:40.627 00:27:40.627 --- 10.0.0.1 ping statistics --- 00:27:40.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.627 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3590071 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3590071 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3590071 ']' 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.627 11:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.569 11:26:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8fdcd62f2cf398c2598c75f9121d5c5d 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.i3j 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8fdcd62f2cf398c2598c75f9121d5c5d 0 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8fdcd62f2cf398c2598c75f9121d5c5d 0 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8fdcd62f2cf398c2598c75f9121d5c5d 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.i3j 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.i3j 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.i3j 
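The `gen_dhchap_key` trace above (xxd from /dev/urandom, mktemp, chmod 0600) can be condensed into a standalone shell function. This is a hedged reconstruction from the xtrace lines only: the inline `python -` step that wraps the hex key in the DHHC-1 secret format is elided in the log, so this sketch stores the raw hex key instead.

```shell
# Sketch of gen_dhchap_key as traced in nvmf/common.sh (hedged reconstruction).
# Usage: gen_dhchap_key <digest-name> <hex-length>, e.g. "null" 32 or "sha512" 64.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    # len hex characters <=> len/2 random bytes, exactly as the log shows
    # (e.g. "xxd -p -c0 -l 16 /dev/urandom" for a 32-character key).
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # NOTE: the real helper pipes $key through an inline python snippet that
    # produces the DHHC-1 formatted secret; that encoding is not visible in
    # the log, so this sketch writes the raw hex key as-is.
    echo "$key" > "$file"
    chmod 0600 "$file"           # keys must not be world-readable
    echo "$file"
}
```

The log invokes this with (null 32), (sha512 64), (null 48), (sha384 48), (sha256 32), (sha256 32), (sha384 48), (null 32), and (sha512 64), filling the `keys[0..4]` and `ckeys[0..3]` slots that are registered later.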
00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ea767e0d0979235038f7be5fefa318b92c59ef7f382a3e600d26682c9da6193 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.70I 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ea767e0d0979235038f7be5fefa318b92c59ef7f382a3e600d26682c9da6193 3 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8ea767e0d0979235038f7be5fefa318b92c59ef7f382a3e600d26682c9da6193 3 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ea767e0d0979235038f7be5fefa318b92c59ef7f382a3e600d26682c9da6193 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:41.569 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.70I 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.70I 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.70I 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=30270da327b0f6a7b31b86836a91986280100c5e019ce6bd 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SyE 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 30270da327b0f6a7b31b86836a91986280100c5e019ce6bd 0 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 30270da327b0f6a7b31b86836a91986280100c5e019ce6bd 0 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=30270da327b0f6a7b31b86836a91986280100c5e019ce6bd 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SyE 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SyE 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SyE 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:41.829 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d2c8bdd1246fbbd0bfb547110ea1a92e896c22dc954fbd72 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Xs1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d2c8bdd1246fbbd0bfb547110ea1a92e896c22dc954fbd72 2 00:27:41.830 11:26:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d2c8bdd1246fbbd0bfb547110ea1a92e896c22dc954fbd72 2 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d2c8bdd1246fbbd0bfb547110ea1a92e896c22dc954fbd72 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Xs1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Xs1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Xs1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0ab4debd63f5e30da868dc074cd341ec 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0cj 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0ab4debd63f5e30da868dc074cd341ec 1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0ab4debd63f5e30da868dc074cd341ec 1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0ab4debd63f5e30da868dc074cd341ec 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0cj 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0cj 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0cj 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=307518706a69492c40e74076c8759416 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.PYH 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 307518706a69492c40e74076c8759416 1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 307518706a69492c40e74076c8759416 1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=307518706a69492c40e74076c8759416 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:41.830 11:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.PYH 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.PYH 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.PYH 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:42.090 11:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b31500f4faf139d80c0fa7ecace35e0766036d47cc0c4e6 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.19V 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b31500f4faf139d80c0fa7ecace35e0766036d47cc0c4e6 2 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b31500f4faf139d80c0fa7ecace35e0766036d47cc0c4e6 2 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b31500f4faf139d80c0fa7ecace35e0766036d47cc0c4e6 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.19V 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.19V 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.19V 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d1f6544c22382d16fcfb65e5f79aa6c6 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3F7 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d1f6544c22382d16fcfb65e5f79aa6c6 0 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d1f6544c22382d16fcfb65e5f79aa6c6 0 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d1f6544c22382d16fcfb65e5f79aa6c6 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3F7 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3F7 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.3F7 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=db2f76517e4fc7a902dcb5c5282e6069e7c41eae3f42caaddfb352c1be656f6e 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.L8c 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key db2f76517e4fc7a902dcb5c5282e6069e7c41eae3f42caaddfb352c1be656f6e 3 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 db2f76517e4fc7a902dcb5c5282e6069e7c41eae3f42caaddfb352c1be656f6e 3 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=db2f76517e4fc7a902dcb5c5282e6069e7c41eae3f42caaddfb352c1be656f6e 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:42.090 11:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.L8c 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.L8c 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.L8c 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3590071 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3590071 ']' 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
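The `nvmf_tcp_init` sequence earlier in the log (addr flush, `ip netns add`, moving cvl_0_0 into the namespace, the port-4420 iptables ACCEPT, ping checks) can be condensed into one sketch. Interface names and addresses are taken from the log; wrapping the steps in a function is my addition, and running it for real requires root plus those specific NICs.

```shell
# Hedged sketch of the nvmf_tcp_init flow seen in the log: the target NIC is
# moved into a private network namespace so target (10.0.0.2) and initiator
# (10.0.0.1) talk over a real link rather than loopback.
net_setup() {
    local ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"          # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays in the root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # allow the NVMe/TCP port through the firewall on the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # cross-namespace reachability check
}
```

After this, the log prepends `ip netns exec cvl_0_0_ns_spdk` to `NVMF_APP`, which is why `nvmf_tgt` is later launched inside the namespace.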
00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.090 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.i3j 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.70I ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.70I 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SyE 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Xs1 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xs1 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0cj 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.PYH ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PYH 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.19V 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3F7 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3F7 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.L8c 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.350 11:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:42.350 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:42.610 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:42.610 11:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:45.905 Waiting for block devices as requested 00:27:45.905 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:46.166 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:46.166 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:46.166 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:46.427 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:46.427 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:46.427 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:46.688 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:46.688 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:46.950 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:46.950 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:46.950 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:46.950 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:47.212 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:47.212 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:47.212 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:47.212 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:48.156 No valid GPT data, bailing 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:48.156 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:48.419 00:27:48.419 Discovery Log Number of Records 2, Generation counter 2 00:27:48.419 =====Discovery Log Entry 0====== 00:27:48.419 trtype: tcp 00:27:48.419 adrfam: ipv4 00:27:48.419 subtype: current discovery subsystem 00:27:48.419 treq: not specified, sq flow control disable supported 00:27:48.419 portid: 1 00:27:48.419 trsvcid: 4420 00:27:48.419 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:48.419 traddr: 10.0.0.1 00:27:48.419 eflags: none 00:27:48.419 sectype: none 00:27:48.419 =====Discovery Log Entry 1====== 00:27:48.419 trtype: tcp 00:27:48.419 adrfam: ipv4 00:27:48.419 subtype: nvme subsystem 00:27:48.419 treq: not specified, sq flow control disable supported 00:27:48.419 portid: 1 00:27:48.419 trsvcid: 4420 00:27:48.419 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:48.419 traddr: 10.0.0.1 00:27:48.419 eflags: none 00:27:48.419 sectype: none 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.419 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.681 nvme0n1 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
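The `configure_kernel_target` steps traced earlier (`modprobe nvmet`, the `mkdir`/`echo`/`ln -s` sequence under `/sys/kernel/config/nvmet`, the `hosts`/`allowed_hosts` link from `host/auth.sh@36-38`) can be condensed into a standalone sketch. This is a hedged reconstruction, not the test's own helper: the wrapped trace mostly shows only the values being echoed, so the configfs attribute file names used below (`device_path`, `enable`, `addr_*`, `attr_allow_any_host`) are the standard kernel nvmet ones and are assumed; the subsystem NQN, host NQN, block device, and TCP address are taken from the trace.

```shell
#!/usr/bin/env bash
# Sketch: bring up a kernel NVMe-oF/TCP soft target the way the
# nvmf_auth_host trace does (requires root and the nvmet modules).
set -e

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet nvmet-tcp

mkdir -p "$subsys/namespaces/1" "$port"

# Back namespace 1 with the local NVMe disk and enable it.
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# Listen on TCP 10.0.0.1:4420 (the initiator-side IP echoed by
# get_main_ns_ip in the trace).
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Export the subsystem through the port, then restrict access to the
# one host NQN the test will connect with ("echo 0" in the trace is
# read here as disabling allow_any_host).
ln -s "$subsys" "$port/subsystems/"
mkdir -p "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
```

After this, `nvme discover -t tcp -a 10.0.0.1 -s 4420` should report two records, the discovery subsystem plus `nqn.2024-02.io.spdk:cnode0`, which matches the "Discovery Log Number of Records 2" output later in the trace.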
00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.681 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.682 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.682 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.682 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.943 nvme0n1 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.943 11:26:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.943 
11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.943 11:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.943 nvme0n1 00:27:48.943 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.943 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.943 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.943 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.943 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.943 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:49.212 nvme0n1 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.212 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.474 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.475 nvme0n1 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.475 11:26:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.475 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.735 nvme0n1 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.735 
11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:49.735 
11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.735 11:26:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.735 11:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.995 nvme0n1 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.995 11:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.995 11:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.995 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.256 nvme0n1 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.256 11:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.256 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.518 nvme0n1 00:27:50.518 11:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:50.518 11:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:50.518 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.519 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.781 nvme0n1 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.781 11:26:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.781 11:26:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.043 nvme0n1 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.043 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 nvme0n1 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.616 
11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.616 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.878 nvme0n1 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.878 11:26:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.878 11:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.139 nvme0n1 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.139 11:26:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:52.139 
11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.139 11:26:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.139 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.400 nvme0n1 00:27:52.400 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.400 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.400 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.400 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.400 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.400 11:26:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.661 
11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.661 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.923 nvme0n1 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.923 11:26:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.923 11:26:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.494 nvme0n1 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.494 11:26:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.494 11:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.065 nvme0n1 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.065 11:27:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.065 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.636 nvme0n1 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.636 11:27:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.636 11:27:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.636 11:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.207 nvme0n1 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.207 11:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.207 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.208 11:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.208 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.468 nvme0n1 00:27:55.469 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.469 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.469 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.469 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.469 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.469 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.730 11:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.730 11:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.301 nvme0n1 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.301 11:27:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.301 11:27:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.301 11:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.301 11:27:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.245 nvme0n1 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.245 11:27:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.245 11:27:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.188 nvme0n1 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.188 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.761 nvme0n1 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.761 
11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.761 11:27:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.703 nvme0n1 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.703 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.704 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.964 nvme0n1 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.964 
11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.964 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.965 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.965 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.965 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.965 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.965 11:27:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.225 nvme0n1 
00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:00.225 11:27:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.225 
11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.225 nvme0n1 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.225 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.225 11:27:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.226 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.487 nvme0n1 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.487 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.749 11:27:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.749 nvme0n1 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.749 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.010 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.011 11:27:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.011 nvme0n1 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.011 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.272 
11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.272 nvme0n1 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.272 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 
00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.534 11:27:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.534 nvme0n1 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.534 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.534 11:27:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.798 nvme0n1 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.798 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.059 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.059 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.059 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:02.059 11:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.059 nvme0n1 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.059 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.320 11:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.320 11:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.320 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.320 11:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.581 nvme0n1 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.581 
11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.581 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.841 nvme0n1 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.841 11:27:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:02.841 11:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.841 11:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.841 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.102 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.362 nvme0n1 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:03.362 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.363 11:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.363 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.623 nvme0n1 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.623 11:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.623 11:27:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.623 
11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.623 11:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.883 nvme0n1 00:28:03.883 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.883 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.883 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.883 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.883 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.883 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.142 11:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.142 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.402 nvme0n1 
00:28:04.402 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.402 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.402 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.402 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.402 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:04.661 11:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.661 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.662 
11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.662 11:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.232 nvme0n1 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.232 11:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.801 nvme0n1 00:28:05.801 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.801 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.801 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.801 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.801 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.802 11:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.062 nvme0n1 00:28:06.062 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.062 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.062 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.062 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.062 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.062 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.323 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.894 nvme0n1 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.894 11:27:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.894 11:27:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.894 11:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.465 nvme0n1
00:28:07.466 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.466 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.466 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.466 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.466 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.466 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==:
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==:
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==:
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==:
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.727 11:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.299 nvme0n1 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.299 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.299 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.299 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.299 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.299 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia:
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3:
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia:
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]]
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3:
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:08.559 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.560 11:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.132 nvme0n1 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.132 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.132 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.132 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.132 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.132 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==:
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC:
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==:
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC:
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.393 11:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.964 nvme0n1 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.964 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:09.964 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:09.964 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.964 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:09.964 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=:
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=:
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.225 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.799 nvme0n1 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.799 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.799 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.799 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.799 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.799 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:11.061 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA:
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=:
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA:
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]]
00:28:11.062 11:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=:
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.062 nvme0n1 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.062 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==:
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==:
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==:
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]]
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==:
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.327 nvme0n1 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:11.327 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia:
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3:
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia:
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]]
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3:
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.328 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.589 nvme0n1 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:11.589 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:11.590 11:27:17
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.590 11:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.590 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.851 nvme0n1 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:11.851 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.852 11:27:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.852 11:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.113 nvme0n1 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.113 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.114 11:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.114 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.375 nvme0n1 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.375 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.376 11:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.376 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.638 nvme0n1 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:12.638 
11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.638 11:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.638 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.900 nvme0n1 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.900 11:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.900 11:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.900 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.162 nvme0n1 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.162 11:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.162 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.423 nvme0n1 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:13.423 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.424 11:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.424 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.996 nvme0n1 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.996 11:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:13.996 11:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.996 11:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.996 11:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.258 nvme0n1 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.258 11:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:14.258 11:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.258 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.520 nvme0n1 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.520 11:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.520 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.781 nvme0n1 00:28:14.781 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.042 11:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.042 
11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.042 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.303 nvme0n1 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.303 11:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.303 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.874 nvme0n1 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:15.874 11:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.874 11:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.452 nvme0n1 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
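The trace above repeatedly runs `nvmet_auth_set_key`, which configures one DH-CHAP secret on the target for a given digest, DH group, and key id. As a hedged sketch of that step (the helper name and the plain `echo` stand-ins are illustrative assumptions; the real script writes these three values into the nvmet configfs host entry):

```shell
# Illustrative stand-in for the nvmet_auth_set_key step seen in the
# xtrace: the three writes are the HMAC name, the DH group, and the
# DHHC-1 secret. Real code writes to configfs; here we just emit them.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3
    echo "hmac(${digest})"   # dhchap hash, e.g. hmac(sha512)
    echo "${dhgroup}"        # dhchap dhgroup, e.g. ffdhe6144
    echo "${key}"            # DHHC-1:xx:...: secret for this keyid
}

nvmet_auth_set_key_sketch sha512 ffdhe6144 "DHHC-1:00:example"
```

The controller-key (`ckey`) variant in the log follows the same pattern, writing the optional bidirectional secret when one is defined for the key id.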
00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.452 
11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.452 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.109 nvme0n1 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.109 11:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.109 11:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.109 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
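Each connect pass above calls `get_main_ns_ip`, whose traced logic maps the transport to the name of an environment variable and then dereferences it. A minimal bash reconstruction of that selection (the address values here are illustrative assumptions; in the trace the tcp candidate resolves to 10.0.0.1):

```shell
# Reconstructed candidate-selection logic from the nvmf/common.sh trace:
# pick the variable *name* by transport, then use bash indirect
# expansion to read its value. Addresses below are examples only.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip_sketch() {
    local transport=$1 ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$transport]}   # e.g. NVMF_INITIATOR_IP for tcp
    echo "${!ip}"                     # indirect expansion to the address
}

get_main_ns_ip_sketch tcp   # prints 10.0.0.1 with the values above
```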
00:28:17.379 nvme0n1 00:28:17.379 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.379 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.379 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.379 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.379 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.379 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.380 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.380 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.380 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.380 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.642 
11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.642 11:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.903 nvme0n1 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.903 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGZkY2Q2MmYyY2YzOThjMjU5OGM3NWY5MTIxZDVjNWQcMUgA: 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: ]] 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGVhNzY3ZTBkMDk3OTIzNTAzOGY3YmU1ZmVmYTMxOGI5MmM1OWVmN2YzODJhM2U2MDBkMjY2ODJjOWRhNjE5M8h1Xak=: 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.165 11:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.165 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.736 nvme0n1 00:28:18.736 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.736 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.736 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.736 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.736 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.736 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.996 11:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:18.996 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.997 11:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.997 11:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.566 nvme0n1 00:28:19.566 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.566 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.566 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.566 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.566 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.567 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.827 11:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.827 11:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.827 11:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.399 nvme0n1 00:28:20.399 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.399 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.399 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.399 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.399 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.399 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.660 11:27:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWIzMTUwMGY0ZmFmMTM5ZDgwYzBmYTdlY2FjZTM1ZTA3NjYwMzZkNDdjYzBjNGU2LpyRDw==: 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: ]] 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFmNjU0NGMyMjM4MmQxNmZjZmI2NWU1Zjc5YWE2YzZX7cgC: 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.660 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.661 11:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:21.233 nvme0n1 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGIyZjc2NTE3ZTRmYzdhOTAyZGNiNWM1MjgyZTYwNjllN2M0MWVhZTNmNDJjYWFkZGZiMzUyYzFiZTY1NmY2ZW0Ci+o=: 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.233 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.495 
11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.495 11:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.068 nvme0n1 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:22.068 
11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:22.068 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.330 request: 00:28:22.330 { 00:28:22.330 "name": "nvme0", 00:28:22.330 "trtype": "tcp", 00:28:22.330 "traddr": "10.0.0.1", 00:28:22.330 "adrfam": "ipv4", 00:28:22.330 "trsvcid": "4420", 00:28:22.330 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:22.330 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:22.330 "prchk_reftag": false, 00:28:22.330 "prchk_guard": false, 00:28:22.330 "hdgst": false, 00:28:22.330 "ddgst": false, 00:28:22.330 "allow_unrecognized_csi": false, 00:28:22.330 "method": "bdev_nvme_attach_controller", 00:28:22.330 "req_id": 1 00:28:22.330 } 00:28:22.330 Got JSON-RPC error response 00:28:22.330 response: 00:28:22.330 { 00:28:22.330 "code": -5, 00:28:22.330 "message": "Input/output 
error" 00:28:22.330 } 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.330 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.331 request: 00:28:22.331 { 00:28:22.331 "name": "nvme0", 00:28:22.331 "trtype": "tcp", 00:28:22.331 "traddr": "10.0.0.1", 
00:28:22.331 "adrfam": "ipv4", 00:28:22.331 "trsvcid": "4420", 00:28:22.331 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:22.331 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:22.331 "prchk_reftag": false, 00:28:22.331 "prchk_guard": false, 00:28:22.331 "hdgst": false, 00:28:22.331 "ddgst": false, 00:28:22.331 "dhchap_key": "key2", 00:28:22.331 "allow_unrecognized_csi": false, 00:28:22.331 "method": "bdev_nvme_attach_controller", 00:28:22.331 "req_id": 1 00:28:22.331 } 00:28:22.331 Got JSON-RPC error response 00:28:22.331 response: 00:28:22.331 { 00:28:22.331 "code": -5, 00:28:22.331 "message": "Input/output error" 00:28:22.331 } 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.331 11:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.331 11:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.331 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.592 request: 00:28:22.592 { 00:28:22.592 "name": "nvme0", 00:28:22.592 "trtype": "tcp", 00:28:22.592 "traddr": "10.0.0.1", 00:28:22.592 "adrfam": "ipv4", 00:28:22.592 "trsvcid": "4420", 00:28:22.592 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:22.592 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:22.592 "prchk_reftag": false, 00:28:22.592 "prchk_guard": false, 00:28:22.592 "hdgst": false, 00:28:22.592 "ddgst": false, 00:28:22.592 "dhchap_key": "key1", 00:28:22.592 "dhchap_ctrlr_key": "ckey2", 00:28:22.592 "allow_unrecognized_csi": false, 00:28:22.592 "method": "bdev_nvme_attach_controller", 00:28:22.592 "req_id": 1 00:28:22.592 } 00:28:22.592 Got JSON-RPC error response 00:28:22.592 response: 00:28:22.592 { 00:28:22.592 "code": -5, 00:28:22.592 "message": "Input/output error" 00:28:22.592 } 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.592 nvme0n1 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.592 11:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.592 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.854 11:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.854 request: 00:28:22.854 { 00:28:22.854 "name": "nvme0", 00:28:22.854 "dhchap_key": "key1", 00:28:22.854 "dhchap_ctrlr_key": "ckey2", 00:28:22.854 "method": "bdev_nvme_set_keys", 00:28:22.854 "req_id": 1 00:28:22.854 } 00:28:22.854 Got JSON-RPC error response 00:28:22.854 response: 00:28:22.854 { 00:28:22.854 "code": -13, 00:28:22.854 "message": "Permission denied" 00:28:22.854 } 00:28:22.854 
11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:22.854 11:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:23.797 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.797 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:23.797 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.797 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.797 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.059 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:24.059 11:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:25.004 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.004 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:25.004 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.004 11:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzAyNzBkYTMyN2IwZjZhN2IzMWI4NjgzNmE5MTk4NjI4MDEwMGM1ZTAxOWNlNmJkG7nP+A==: 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: ]] 00:28:25.004 11:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDJjOGJkZDEyNDZmYmJkMGJmYjU0NzExMGVhMWE5MmU4OTZjMjJkYzk1NGZiZDcyfP6FNw==: 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.004 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 nvme0n1 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.266 11:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGFiNGRlYmQ2M2Y1ZTMwZGE4NjhkYzA3NGNkMzQxZWPESOia: 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: ]] 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA3NTE4NzA2YTY5NDkyYzQwZTc0MDc2Yzg3NTk0MTaCVXq3: 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:25.266 
11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 request: 00:28:25.266 { 00:28:25.266 "name": "nvme0", 00:28:25.266 "dhchap_key": "key2", 00:28:25.266 "dhchap_ctrlr_key": "ckey1", 00:28:25.266 "method": "bdev_nvme_set_keys", 00:28:25.266 "req_id": 1 00:28:25.266 } 00:28:25.266 Got JSON-RPC error response 00:28:25.266 response: 00:28:25.266 { 00:28:25.266 "code": -13, 00:28:25.266 "message": "Permission denied" 00:28:25.266 } 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.266 11:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:25.266 11:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:26.208 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:26.470 rmmod nvme_tcp 00:28:26.470 rmmod nvme_fabrics 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3590071 ']' 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3590071 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3590071 ']' 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3590071 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3590071 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3590071' 00:28:26.470 killing process with pid 3590071 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3590071 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3590071 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.470 11:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:29.017 11:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:33.223 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:33.223 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:33.223 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.i3j /tmp/spdk.key-null.SyE /tmp/spdk.key-sha256.0cj /tmp/spdk.key-sha384.19V /tmp/spdk.key-sha512.L8c 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:33.223 11:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:36.524 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:36.524 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:36.524 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:36.525 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:36.525 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:36.786 00:28:36.786 real 1m4.592s 00:28:36.786 user 0m57.140s 00:28:36.786 sys 0m17.016s 00:28:36.786 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.786 11:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.786 ************************************ 00:28:36.786 END TEST nvmf_auth_host 00:28:36.786 ************************************ 00:28:36.786 11:27:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:28:36.786 11:27:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:36.786 11:27:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:36.786 11:27:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.786 11:27:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.786 ************************************ 00:28:36.786 START TEST nvmf_digest 00:28:36.786 ************************************ 00:28:36.786 11:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:37.047 * Looking for test storage... 00:28:37.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.047 11:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:37.047 11:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:28:37.047 11:27:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:37.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.047 --rc genhtml_branch_coverage=1 00:28:37.047 --rc genhtml_function_coverage=1 00:28:37.047 --rc genhtml_legend=1 00:28:37.047 --rc geninfo_all_blocks=1 00:28:37.047 --rc geninfo_unexecuted_blocks=1 00:28:37.047 00:28:37.047 ' 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:37.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.047 --rc genhtml_branch_coverage=1 00:28:37.047 --rc genhtml_function_coverage=1 00:28:37.047 --rc genhtml_legend=1 00:28:37.047 --rc geninfo_all_blocks=1 00:28:37.047 --rc geninfo_unexecuted_blocks=1 00:28:37.047 00:28:37.047 ' 00:28:37.047 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:37.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.047 --rc genhtml_branch_coverage=1 00:28:37.047 --rc genhtml_function_coverage=1 00:28:37.048 --rc genhtml_legend=1 00:28:37.048 --rc geninfo_all_blocks=1 00:28:37.048 --rc geninfo_unexecuted_blocks=1 00:28:37.048 00:28:37.048 ' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.048 --rc genhtml_branch_coverage=1 00:28:37.048 --rc genhtml_function_coverage=1 00:28:37.048 --rc genhtml_legend=1 00:28:37.048 --rc geninfo_all_blocks=1 00:28:37.048 --rc geninfo_unexecuted_blocks=1 00:28:37.048 00:28:37.048 ' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.048 11:27:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.048 11:27:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.203 11:27:51 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:45.203 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:45.203 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:45.203 Found net devices under 0000:31:00.0: cvl_0_0 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:45.203 Found net devices under 0000:31:00.1: cvl_0_1 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.203 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.204 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.465 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.465 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.465 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.465 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:28:45.466 00:28:45.466 --- 10.0.0.2 ping statistics --- 00:28:45.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.466 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:28:45.466 00:28:45.466 --- 10.0.0.1 ping statistics --- 00:28:45.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.466 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:45.466 ************************************ 00:28:45.466 START TEST nvmf_digest_clean 00:28:45.466 ************************************ 00:28:45.466 
11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3609201 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3609201 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3609201 ']' 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.466 11:27:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.466 11:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.466 [2024-12-06 11:27:51.597903] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:28:45.466 [2024-12-06 11:27:51.597951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.728 [2024-12-06 11:27:51.685023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.728 [2024-12-06 11:27:51.719335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.728 [2024-12-06 11:27:51.719367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.728 [2024-12-06 11:27:51.719375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.728 [2024-12-06 11:27:51.719382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.728 [2024-12-06 11:27:51.719388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:45.728 [2024-12-06 11:27:51.719965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.301 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.562 null0 00:28:46.562 [2024-12-06 11:27:52.501667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.562 [2024-12-06 11:27:52.525875] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3609256 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3609256 /var/tmp/bperf.sock 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3609256 ']' 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.562 11:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.562 [2024-12-06 11:27:52.584037] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:28:46.562 [2024-12-06 11:27:52.584086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609256 ] 00:28:46.562 [2024-12-06 11:27:52.679553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.562 [2024-12-06 11:27:52.715607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.505 11:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.505 11:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:47.505 11:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:47.505 11:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:47.505 11:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.505 11:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.505 11:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.076 nvme0n1 00:28:48.077 11:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:48.077 11:27:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.077 Running I/O for 2 seconds... 00:28:49.963 19147.00 IOPS, 74.79 MiB/s [2024-12-06T10:27:56.130Z] 19359.50 IOPS, 75.62 MiB/s 00:28:49.963 Latency(us) 00:28:49.963 [2024-12-06T10:27:56.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.963 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:49.963 nvme0n1 : 2.00 19384.89 75.72 0.00 0.00 6594.99 2880.85 20534.61 00:28:49.963 [2024-12-06T10:27:56.130Z] =================================================================================================================== 00:28:49.963 [2024-12-06T10:27:56.130Z] Total : 19384.89 75.72 0.00 0.00 6594.99 2880.85 20534.61 00:28:49.963 { 00:28:49.963 "results": [ 00:28:49.963 { 00:28:49.963 "job": "nvme0n1", 00:28:49.963 "core_mask": "0x2", 00:28:49.963 "workload": "randread", 00:28:49.963 "status": "finished", 00:28:49.963 "queue_depth": 128, 00:28:49.963 "io_size": 4096, 00:28:49.963 "runtime": 2.003983, 00:28:49.963 "iops": 19384.894981644054, 00:28:49.963 "mibps": 75.72224602204709, 00:28:49.963 "io_failed": 0, 00:28:49.963 "io_timeout": 0, 00:28:49.963 "avg_latency_us": 6594.986677649926, 00:28:49.963 "min_latency_us": 2880.8533333333335, 00:28:49.963 "max_latency_us": 20534.613333333335 00:28:49.963 } 00:28:49.963 ], 00:28:49.963 "core_count": 1 00:28:49.963 } 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:50.224 | select(.opcode=="crc32c") 00:28:50.224 | "\(.module_name) \(.executed)"' 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3609256 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3609256 ']' 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3609256 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609256 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609256' 00:28:50.224 killing process with pid 3609256 00:28:50.224 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3609256 00:28:50.224 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.224 00:28:50.224 Latency(us) 00:28:50.224 [2024-12-06T10:27:56.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.224 [2024-12-06T10:27:56.391Z] =================================================================================================================== 00:28:50.224 [2024-12-06T10:27:56.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3609256 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3610126 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3610126 /var/tmp/bperf.sock 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3610126 ']' 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.486 11:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:50.486 [2024-12-06 11:27:56.545154] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:28:50.486 [2024-12-06 11:27:56.545212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610126 ] 00:28:50.486 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:50.486 Zero copy mechanism will not be used. 
00:28:50.486 [2024-12-06 11:27:56.635864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.747 [2024-12-06 11:27:56.665658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.318 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.318 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:51.318 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:51.318 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:51.318 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:51.612 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.612 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.871 nvme0n1 00:28:51.871 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:51.871 11:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.131 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.131 Zero copy mechanism will not be used. 00:28:52.131 Running I/O for 2 seconds... 
00:28:54.009 5816.00 IOPS, 727.00 MiB/s [2024-12-06T10:28:00.176Z] 5868.50 IOPS, 733.56 MiB/s 00:28:54.009 Latency(us) 00:28:54.009 [2024-12-06T10:28:00.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.009 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:54.009 nvme0n1 : 2.01 5861.81 732.73 0.00 0.00 2726.68 505.17 10048.85 00:28:54.009 [2024-12-06T10:28:00.176Z] =================================================================================================================== 00:28:54.009 [2024-12-06T10:28:00.176Z] Total : 5861.81 732.73 0.00 0.00 2726.68 505.17 10048.85 00:28:54.009 { 00:28:54.009 "results": [ 00:28:54.009 { 00:28:54.009 "job": "nvme0n1", 00:28:54.009 "core_mask": "0x2", 00:28:54.009 "workload": "randread", 00:28:54.009 "status": "finished", 00:28:54.009 "queue_depth": 16, 00:28:54.009 "io_size": 131072, 00:28:54.009 "runtime": 2.005352, 00:28:54.009 "iops": 5861.81378630784, 00:28:54.009 "mibps": 732.72672328848, 00:28:54.009 "io_failed": 0, 00:28:54.009 "io_timeout": 0, 00:28:54.009 "avg_latency_us": 2726.684202467035, 00:28:54.009 "min_latency_us": 505.17333333333335, 00:28:54.009 "max_latency_us": 10048.853333333333 00:28:54.009 } 00:28:54.009 ], 00:28:54.009 "core_count": 1 00:28:54.009 } 00:28:54.009 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:54.009 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:54.009 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:54.009 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:54.009 | select(.opcode=="crc32c") 00:28:54.009 | "\(.module_name) \(.executed)"' 00:28:54.009 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3610126 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3610126 ']' 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3610126 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3610126 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3610126' 00:28:54.270 killing process with pid 3610126 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3610126 00:28:54.270 Received shutdown signal, test time was about 2.000000 seconds 
00:28:54.270 00:28:54.270 Latency(us) 00:28:54.270 [2024-12-06T10:28:00.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.270 [2024-12-06T10:28:00.437Z] =================================================================================================================== 00:28:54.270 [2024-12-06T10:28:00.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3610126 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:54.270 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3610926 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3610926 /var/tmp/bperf.sock 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3610926 ']' 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:54.531 11:28:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:54.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.531 11:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.531 [2024-12-06 11:28:00.485954] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:28:54.531 [2024-12-06 11:28:00.486009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610926 ] 00:28:54.531 [2024-12-06 11:28:00.573960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.531 [2024-12-06 11:28:00.602170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.476 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.476 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:55.476 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:55.476 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:55.476 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:55.476 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.476 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.737 nvme0n1 00:28:55.737 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:55.737 11:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.998 Running I/O for 2 seconds... 
00:28:57.883 21710.00 IOPS, 84.80 MiB/s [2024-12-06T10:28:04.050Z] 21732.50 IOPS, 84.89 MiB/s 00:28:57.883 Latency(us) 00:28:57.883 [2024-12-06T10:28:04.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.883 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:57.883 nvme0n1 : 2.00 21767.66 85.03 0.00 0.00 5875.07 2116.27 10212.69 00:28:57.883 [2024-12-06T10:28:04.050Z] =================================================================================================================== 00:28:57.883 [2024-12-06T10:28:04.050Z] Total : 21767.66 85.03 0.00 0.00 5875.07 2116.27 10212.69 00:28:57.883 { 00:28:57.883 "results": [ 00:28:57.883 { 00:28:57.883 "job": "nvme0n1", 00:28:57.883 "core_mask": "0x2", 00:28:57.883 "workload": "randwrite", 00:28:57.883 "status": "finished", 00:28:57.883 "queue_depth": 128, 00:28:57.883 "io_size": 4096, 00:28:57.883 "runtime": 2.00265, 00:28:57.883 "iops": 21767.657853344317, 00:28:57.883 "mibps": 85.02991348962624, 00:28:57.883 "io_failed": 0, 00:28:57.883 "io_timeout": 0, 00:28:57.883 "avg_latency_us": 5875.073212671759, 00:28:57.883 "min_latency_us": 2116.266666666667, 00:28:57.883 "max_latency_us": 10212.693333333333 00:28:57.883 } 00:28:57.883 ], 00:28:57.883 "core_count": 1 00:28:57.883 } 00:28:57.883 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:57.883 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:57.883 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:57.883 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:57.883 11:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:28:57.883 | select(.opcode=="crc32c") 00:28:57.883 | "\(.module_name) \(.executed)"' 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3610926 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3610926 ']' 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3610926 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3610926 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3610926' 00:28:58.144 killing process with pid 3610926 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3610926 00:28:58.144 Received shutdown signal, test time was about 2.000000 seconds 00:28:58.144 
00:28:58.144 Latency(us) 00:28:58.144 [2024-12-06T10:28:04.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.144 [2024-12-06T10:28:04.311Z] =================================================================================================================== 00:28:58.144 [2024-12-06T10:28:04.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3610926 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3611607 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3611607 /var/tmp/bperf.sock 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3611607 ']' 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:58.144 11:28:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.144 11:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.405 [2024-12-06 11:28:04.347326] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:28:58.405 [2024-12-06 11:28:04.347385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611607 ] 00:28:58.405 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:58.405 Zero copy mechanism will not be used. 
00:28:58.405 [2024-12-06 11:28:04.437905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.405 [2024-12-06 11:28:04.466096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.977 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.977 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:58.977 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:58.977 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:58.977 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:59.237 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.237 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.497 nvme0n1 00:28:59.497 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:59.497 11:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.757 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:59.757 Zero copy mechanism will not be used. 00:28:59.757 Running I/O for 2 seconds... 
00:29:01.637 3541.00 IOPS, 442.62 MiB/s [2024-12-06T10:28:07.804Z] 3521.50 IOPS, 440.19 MiB/s 00:29:01.637 Latency(us) 00:29:01.637 [2024-12-06T10:28:07.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.637 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:01.637 nvme0n1 : 2.01 3519.55 439.94 0.00 0.00 4537.66 1925.12 10704.21 00:29:01.637 [2024-12-06T10:28:07.805Z] =================================================================================================================== 00:29:01.638 [2024-12-06T10:28:07.805Z] Total : 3519.55 439.94 0.00 0.00 4537.66 1925.12 10704.21 00:29:01.638 { 00:29:01.638 "results": [ 00:29:01.638 { 00:29:01.638 "job": "nvme0n1", 00:29:01.638 "core_mask": "0x2", 00:29:01.638 "workload": "randwrite", 00:29:01.638 "status": "finished", 00:29:01.638 "queue_depth": 16, 00:29:01.638 "io_size": 131072, 00:29:01.638 "runtime": 2.006504, 00:29:01.638 "iops": 3519.5544090617313, 00:29:01.638 "mibps": 439.9443011327164, 00:29:01.638 "io_failed": 0, 00:29:01.638 "io_timeout": 0, 00:29:01.638 "avg_latency_us": 4537.655946379685, 00:29:01.638 "min_latency_us": 1925.12, 00:29:01.638 "max_latency_us": 10704.213333333333 00:29:01.638 } 00:29:01.638 ], 00:29:01.638 "core_count": 1 00:29:01.638 } 00:29:01.638 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:01.638 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:01.638 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:01.638 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:01.638 | select(.opcode=="crc32c") 00:29:01.638 | "\(.module_name) \(.executed)"' 00:29:01.638 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3611607 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3611607 ']' 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3611607 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3611607 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3611607' 00:29:01.899 killing process with pid 3611607 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3611607 00:29:01.899 Received shutdown signal, test time was about 2.000000 seconds 
00:29:01.899 00:29:01.899 Latency(us) 00:29:01.899 [2024-12-06T10:28:08.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.899 [2024-12-06T10:28:08.066Z] =================================================================================================================== 00:29:01.899 [2024-12-06T10:28:08.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.899 11:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3611607 00:29:01.899 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3609201 00:29:01.899 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3609201 ']' 00:29:01.899 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3609201 00:29:01.899 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:01.899 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.899 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609201 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609201' 00:29:02.160 killing process with pid 3609201 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3609201 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3609201 00:29:02.160 00:29:02.160 
real 0m16.715s 00:29:02.160 user 0m33.107s 00:29:02.160 sys 0m3.523s 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:02.160 ************************************ 00:29:02.160 END TEST nvmf_digest_clean 00:29:02.160 ************************************ 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:02.160 ************************************ 00:29:02.160 START TEST nvmf_digest_error 00:29:02.160 ************************************ 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.160 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.420 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3612337 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3612337 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3612337 ']' 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.421 11:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.421 [2024-12-06 11:28:08.391238] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:02.421 [2024-12-06 11:28:08.391287] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.421 [2024-12-06 11:28:08.480454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.421 [2024-12-06 11:28:08.514813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.421 [2024-12-06 11:28:08.514848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:02.421 [2024-12-06 11:28:08.514860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.421 [2024-12-06 11:28:08.514873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.421 [2024-12-06 11:28:08.514879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.421 [2024-12-06 11:28:08.515440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.363 [2024-12-06 11:28:09.217447] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.363 11:28:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.363 null0 00:29:03.363 [2024-12-06 11:28:09.300926] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.363 [2024-12-06 11:28:09.325129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3612670 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3612670 /var/tmp/bperf.sock 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3612670 ']' 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:03.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.363 11:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.363 [2024-12-06 11:28:09.384432] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:03.363 [2024-12-06 11:28:09.384481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612670 ] 00:29:03.363 [2024-12-06 11:28:09.472843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.363 [2024-12-06 11:28:09.502740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.304 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.567 nvme0n1 00:29:04.567 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:04.567 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.567 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.567 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.567 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:04.567 11:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:04.828 Running I/O for 2 seconds... 00:29:04.828 [2024-12-06 11:28:10.792384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.792418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.792427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.805972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.805994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.806001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.819284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.819304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.819311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.832348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.832367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18251 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.832374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.845062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.845081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.845088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.857355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.857373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.857380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.869783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.869801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.869808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.882445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.882463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.882469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.893547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.893566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.893572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.907231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.907249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.907256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.920027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.920044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.920055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.932610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.932628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.828 [2024-12-06 11:28:10.932635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.828 [2024-12-06 11:28:10.945086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.828 [2024-12-06 11:28:10.945105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.829 [2024-12-06 11:28:10.945113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.829 [2024-12-06 11:28:10.958193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.829 [2024-12-06 11:28:10.958212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.829 [2024-12-06 11:28:10.958218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.829 [2024-12-06 11:28:10.968465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.829 [2024-12-06 11:28:10.968483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.829 [2024-12-06 11:28:10.968489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.829 [2024-12-06 11:28:10.981932] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:04.829 [2024-12-06 11:28:10.981950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.829 [2024-12-06 11:28:10.981957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:10.995579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:10.995597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:10.995604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.008598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.008616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.008623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.020175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.020192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.020199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.032924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.032945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.032952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.045542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.045560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.045567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.056829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.056847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.056854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.071152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.071171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.071177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.083100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.083118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.083124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.096020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.096039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.096047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.108922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.108939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.108946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.121382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.121400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 
11:28:11.121406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.132209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.132225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.132235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.144620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.144638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.090 [2024-12-06 11:28:11.144644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.090 [2024-12-06 11:28:11.157636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.090 [2024-12-06 11:28:11.157653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.091 [2024-12-06 11:28:11.169752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.091 [2024-12-06 11:28:11.169770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14477 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.169777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.091 [2024-12-06 11:28:11.182418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.091 [2024-12-06 11:28:11.182436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.182443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.091 [2024-12-06 11:28:11.195819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.091 [2024-12-06 11:28:11.195837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.195843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.091 [2024-12-06 11:28:11.207211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.091 [2024-12-06 11:28:11.207228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.207235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.091 [2024-12-06 11:28:11.219292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.091 [2024-12-06 11:28:11.219309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.219316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.091 [2024-12-06 11:28:11.231710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.091 [2024-12-06 11:28:11.231727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.231734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.091 [2024-12-06 11:28:11.245258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.091 [2024-12-06 11:28:11.245279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.091 [2024-12-06 11:28:11.245286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.353 [2024-12-06 11:28:11.258786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.353 [2024-12-06 11:28:11.258804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.353 [2024-12-06 11:28:11.258810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.353 [2024-12-06 11:28:11.270680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1059680) 00:29:05.353 [2024-12-06 11:28:11.270697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.353 [2024-12-06 11:28:11.270704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.353 [2024-12-06 11:28:11.282520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.353 [2024-12-06 11:28:11.282538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.353 [2024-12-06 11:28:11.282544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.353 [2024-12-06 11:28:11.296092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.353 [2024-12-06 11:28:11.296110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.353 [2024-12-06 11:28:11.296117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.353 [2024-12-06 11:28:11.309044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.353 [2024-12-06 11:28:11.309062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.353 [2024-12-06 11:28:11.309068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.353 [2024-12-06 11:28:11.320187] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.353 [2024-12-06 11:28:11.320204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.353 [2024-12-06 11:28:11.320211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.333298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.333322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.345603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.345620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.345626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.357106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.357124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.357131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.369413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.369431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.369437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.382572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.382590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.382596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.394019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.394037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.394043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.406321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.406338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.406345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.420001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.420019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.420025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.432703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.432720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.432726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.446634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.446652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.446658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.460977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.460994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 
11:28:11.461004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.471347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.471365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.471371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.484124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.484141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.484148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.496226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.496243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.496250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.354 [2024-12-06 11:28:11.509540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.354 [2024-12-06 11:28:11.509558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12473 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.354 [2024-12-06 11:28:11.509565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.522198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.522215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.522222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.535444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.535462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.535468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.546857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.546877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.546884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.558271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.558288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.558295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.570977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.570997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.571004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.584880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.584898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.584904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.598623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.598642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.598648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.610275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.610293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.610299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.622222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.622240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.622247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.633330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.633347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.633354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.647233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.647250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.647257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.659167] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.659184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.659191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.671264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.671281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.671287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.685219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.685236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.685243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.698222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.698239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.698246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.710349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.710366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.710373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.721737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.721754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.721761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.734032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.734050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.734056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.744881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.744898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.744905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 [2024-12-06 11:28:11.758074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.758091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.758098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.616 20100.00 IOPS, 78.52 MiB/s [2024-12-06T10:28:11.783Z] [2024-12-06 11:28:11.772356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.616 [2024-12-06 11:28:11.772373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.616 [2024-12-06 11:28:11.772379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.784969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.784990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.784998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.796231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.796249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20204 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.796256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.808835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.808852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.808859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.821055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.821073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.821080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.833759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.833777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.833785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.845692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.845709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:108 nsid:1 lba:17456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.845716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.858219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.858237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.858243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.871996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.872014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.872021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.884250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.884268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.884275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.895913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.895931] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.895938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.909532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.909550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.909557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.922876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.922893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.922900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.932820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.932838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.932844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.945728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.945746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.945753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.959433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.959451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.959458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.973146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.973163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.973170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.983469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.983487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.983494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:11.996194] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:11.996212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.879 [2024-12-06 11:28:11.996221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.879 [2024-12-06 11:28:12.009345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.879 [2024-12-06 11:28:12.009362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.880 [2024-12-06 11:28:12.009369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.880 [2024-12-06 11:28:12.022129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.880 [2024-12-06 11:28:12.022146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.880 [2024-12-06 11:28:12.022153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.880 [2024-12-06 11:28:12.036327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:05.880 [2024-12-06 11:28:12.036346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.880 [2024-12-06 11:28:12.036352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:06.143 [2024-12-06 11:28:12.048866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.143 [2024-12-06 11:28:12.048883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.143 [2024-12-06 11:28:12.048891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.143 [2024-12-06 11:28:12.058657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.143 [2024-12-06 11:28:12.058674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.143 [2024-12-06 11:28:12.058680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.143 [2024-12-06 11:28:12.072939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.072958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.072964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.086399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.086417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.086424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.098829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.098847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.098854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.110915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.110936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.110943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.121187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.121207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.121213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.135219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.135237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 
11:28:12.135244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.149146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.149164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.149171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.162500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.162518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.162525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.174602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.174620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.174627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.185719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.185737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6612 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.185743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.198798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.198816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.198823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.211715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.211733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.211740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.223860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.223880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.223888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.235163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.235180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.235187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.247465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.247483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.247490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.260505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.260522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.260529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.272657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.272675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.272681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.286212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.286230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.286237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.144 [2024-12-06 11:28:12.296147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.144 [2024-12-06 11:28:12.296164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.144 [2024-12-06 11:28:12.296171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.310082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.310099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.310106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.322405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.322427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.322434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.335780] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.335798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.335805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.347847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.347870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.347877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.359824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.359842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.359849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.373372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.373389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.373395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.385486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.385505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.385511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.398997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.399015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.399023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.411550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.411567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.411574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.422638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.412 [2024-12-06 11:28:12.422656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.412 [2024-12-06 11:28:12.422662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.412 [2024-12-06 11:28:12.434865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.434883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.434890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.448234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.448252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.448258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.461224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.461241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.461248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.473521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.473539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 
11:28:12.473546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.484907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.484925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.484932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.498156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.498174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.498181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.511734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.511753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.511760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.524598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.524615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22745 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.524622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.535029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.535047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.535057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.547655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.547672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.547679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.413 [2024-12-06 11:28:12.560481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.413 [2024-12-06 11:28:12.560499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.413 [2024-12-06 11:28:12.560505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.573484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.573503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.573510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.587076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.587094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.587101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.596496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.596514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.596521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.611911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.611929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.611936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.623262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.623279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.623285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.636002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.636019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.636026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.649068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.649089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.649096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.662478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.662496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.662503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.675299] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.675317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.675324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.685922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.685940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.685946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.698473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.698490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.698497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.712763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.712781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.712787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.725184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.725202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.725209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.738080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.738097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.738104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.748720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.748737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.748744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 [2024-12-06 11:28:12.762469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.762487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.762493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 20231.00 IOPS, 79.03 MiB/s [2024-12-06T10:28:12.926Z] [2024-12-06 11:28:12.774115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1059680) 00:29:06.759 [2024-12-06 11:28:12.774133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.759 [2024-12-06 11:28:12.774140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.759 00:29:06.759 Latency(us) 00:29:06.759 [2024-12-06T10:28:12.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.759 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:06.759 nvme0n1 : 2.00 20257.16 79.13 0.00 0.00 6312.23 2266.45 22391.47 00:29:06.759 [2024-12-06T10:28:12.926Z] =================================================================================================================== 00:29:06.759 [2024-12-06T10:28:12.926Z] Total : 20257.16 79.13 0.00 0.00 6312.23 2266.45 22391.47 00:29:06.759 { 00:29:06.759 "results": [ 00:29:06.759 { 00:29:06.759 "job": "nvme0n1", 00:29:06.759 "core_mask": "0x2", 00:29:06.759 "workload": "randread", 00:29:06.759 "status": "finished", 00:29:06.759 "queue_depth": 128, 00:29:06.759 "io_size": 4096, 00:29:06.759 "runtime": 2.004279, 00:29:06.759 "iops": 20257.159806593794, 00:29:06.759 "mibps": 79.12953049450701, 00:29:06.759 "io_failed": 0, 00:29:06.759 "io_timeout": 0, 00:29:06.759 "avg_latency_us": 6312.228967759414, 00:29:06.759 "min_latency_us": 2266.4533333333334, 00:29:06.759 "max_latency_us": 22391.466666666667 00:29:06.759 } 00:29:06.759 ], 00:29:06.759 "core_count": 1 00:29:06.759 } 00:29:06.759 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:29:06.759 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:06.759 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:06.759 | .driver_specific 00:29:06.759 | .nvme_error 00:29:06.760 | .status_code 00:29:06.760 | .command_transient_transport_error' 00:29:06.760 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:07.119 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:29:07.119 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3612670 00:29:07.119 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3612670 ']' 00:29:07.119 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3612670 00:29:07.119 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:07.119 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.119 11:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3612670 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3612670' 00:29:07.119 killing process with pid 3612670 00:29:07.119 11:28:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3612670 00:29:07.119 Received shutdown signal, test time was about 2.000000 seconds 00:29:07.119 00:29:07.119 Latency(us) 00:29:07.119 [2024-12-06T10:28:13.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.119 [2024-12-06T10:28:13.286Z] =================================================================================================================== 00:29:07.119 [2024-12-06T10:28:13.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3612670 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3613361 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3613361 /var/tmp/bperf.sock 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3613361 ']' 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.119 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:07.119 [2024-12-06 11:28:13.210313] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:07.119 [2024-12-06 11:28:13.210366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613361 ] 00:29:07.119 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.119 Zero copy mechanism will not be used. 
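`waitforlisten` above blocks until the freshly launched bdevperf process accepts connections on its RPC socket (`/var/tmp/bperf.sock`), retrying up to `max_retries` times before giving up. A hedged sketch of that wait loop in Python (the retry count, delay, and throwaway listener are illustrative, not SPDK's implementation):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path: str, max_retries: int = 100,
                    delay: float = 0.1) -> bool:
    """Poll a UNIX domain socket until a server accepts connections."""
    for _ in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except OSError:
            time.sleep(delay)  # not created or not listening yet; retry
    return False

# Demo: stand in for bdevperf with a deliberately slow-starting listener.
sock_path = os.path.join(tempfile.mkdtemp(), "bperf.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def serve():
    time.sleep(0.3)          # simulate process startup time
    server.bind(sock_path)
    server.listen(1)

threading.Thread(target=serve, daemon=True).start()
ok = wait_for_listen(sock_path)
server.close()
print(ok)
```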
00:29:07.382 [2024-12-06 11:28:13.300241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.382 [2024-12-06 11:28:13.328012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.954 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.954 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:07.954 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:07.954 11:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:08.215 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:08.215 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.215 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.215 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.215 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.215 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.476 nvme0n1 00:29:08.476 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:08.476 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.476 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.476 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.476 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:08.476 11:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.738 Zero copy mechanism will not be used. 00:29:08.738 Running I/O for 2 seconds... 00:29:08.738 [2024-12-06 11:28:14.680433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.680467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.680476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.692076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.692101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.692109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.738 
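Each "data digest error" in the trace is the host rejecting a received data PDU whose CRC32C data digest (enabled via `--ddgst` on attach) no longer matches, because `accel_error_inject_error -o crc32c -t corrupt` corrupts the computed checksum; the command then completes with the COMMAND TRANSIENT TRANSPORT ERROR status counted above. NVMe/TCP's digest is plain CRC32C (Castagnoli polynomial, reflected, init and final XOR of 0xFFFFFFFF); a minimal bitwise sketch for reference:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC32C (Castagnoli): reflected, init/xorout 0xFFFFFFFF."""
    poly = 0x82F63B78          # reflected form of polynomial 0x1EDC6F41
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check vector for the nine ASCII digits.
digest = crc32c(b"123456789")
print(hex(digest))  # 0xe3069283
```

Real implementations use a table-driven or hardware-accelerated (SSE4.2 `crc32` instruction) variant; the bitwise loop here only illustrates what value the receive path compares against the PDU's DDGST field.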
[2024-12-06 11:28:14.703710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.703734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.703741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.713037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.713057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.713064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.725467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.725487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.725493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.734681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.734708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.734715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.746448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.746467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.746473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.758716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.758735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.758741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.770849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.770874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.770881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.780733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.780752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.780759] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.792330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.792349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.792356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.801844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.801868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.801874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.812626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.812645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.812652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.823611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.823630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 
11:28:14.823637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.834321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.834340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.738 [2024-12-06 11:28:14.834347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:08.738 [2024-12-06 11:28:14.846210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.738 [2024-12-06 11:28:14.846229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.739 [2024-12-06 11:28:14.846236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.739 [2024-12-06 11:28:14.856245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.739 [2024-12-06 11:28:14.856264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.739 [2024-12-06 11:28:14.856271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:08.739 [2024-12-06 11:28:14.865946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.739 [2024-12-06 11:28:14.865965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.739 [2024-12-06 11:28:14.865971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.739 [2024-12-06 11:28:14.875704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.739 [2024-12-06 11:28:14.875723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.739 [2024-12-06 11:28:14.875730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:08.739 [2024-12-06 11:28:14.886820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.739 [2024-12-06 11:28:14.886839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.739 [2024-12-06 11:28:14.886846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.739 [2024-12-06 11:28:14.898540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:08.739 [2024-12-06 11:28:14.898558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.739 [2024-12-06 11:28:14.898565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.908590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.908609] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.908616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.918662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.918681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.918691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.927885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.927904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.927911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.937192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.937211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.937218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.948346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 
11:28:14.948366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.948372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.958159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.958178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.958185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.967005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.967024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.967030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.977089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.977107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.977114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.985644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.985663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.985669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:14.995744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:14.995763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:14.995770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:15.005769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:15.005791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:15.005798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:15.013107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:15.013125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:15.013132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:15.023308] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:15.023326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:15.023333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:15.033767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.001 [2024-12-06 11:28:15.033786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.001 [2024-12-06 11:28:15.033792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.001 [2024-12-06 11:28:15.045054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.045072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.045079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.055014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.055033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.055040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.063652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.063671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.063677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.072185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.072204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.072210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.083543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.083562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.083569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.090794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.090812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.090819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.096006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.096024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.096031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.104095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.104114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.104120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.113819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.113838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.113845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.124068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.124087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.124094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.132693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.132711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.132718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.144123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.144142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.144149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.152589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.152608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.002 [2024-12-06 11:28:15.152615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.002 [2024-12-06 11:28:15.162045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.002 [2024-12-06 11:28:15.162064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:09.002 [2024-12-06 11:28:15.162073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.171203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.171222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.171230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.181637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.181657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.181663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.190773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.190792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.190798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.200026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.200045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.200052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.211661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.211681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.211687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.218424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.218444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.218451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.224492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.224511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.224518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.230363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.230382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.230389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.236006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.236025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.236031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.243466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.243485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.243492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.253141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.253159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.253166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.261364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 
00:29:09.263 [2024-12-06 11:28:15.261383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.261389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.272240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.272259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.272266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.281311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.281329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.281336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.290150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.290168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.263 [2024-12-06 11:28:15.290175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.263 [2024-12-06 11:28:15.301211] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.263 [2024-12-06 11:28:15.301229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.301236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.311833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.311853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.311869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.322492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.322511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.322518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.331703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.331722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.331728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.341048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.341067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.341074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.349186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.349206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.349212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.357243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.357262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.357269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.367454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.367473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.367480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.377906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.377925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.377932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.385858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.385882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.385889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.396325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.396349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.396355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.404889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.404908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.404916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.414662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.414681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.414688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.264 [2024-12-06 11:28:15.425359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.264 [2024-12-06 11:28:15.425378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.264 [2024-12-06 11:28:15.425385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.436928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.436947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.436953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.449023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.449043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:09.524 [2024-12-06 11:28:15.449050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.462074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.462093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.462100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.471065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.471084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.471090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.482258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.482277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.482284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.488231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.488250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.488257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.496671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.496690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.496697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.506527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.506546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.506553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.512312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.512331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.512339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.522609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.522628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.522635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.534706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.534726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.534733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.544635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.544654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.544661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.554689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.554708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.554715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.566587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 
00:29:09.524 [2024-12-06 11:28:15.566607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.566617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.576175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.524 [2024-12-06 11:28:15.576195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.524 [2024-12-06 11:28:15.576201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.524 [2024-12-06 11:28:15.587726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.587746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.587752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.597595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.597615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.597622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.608183] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.608202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.608209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.616933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.616952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.616959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.627087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.627106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.627113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.637198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.637218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.637225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.646984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.647004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.647010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.655970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.655993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.656000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.663974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.663993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.664000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.525 3155.00 IOPS, 394.38 MiB/s [2024-12-06T10:28:15.692Z] [2024-12-06 11:28:15.673354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.673373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.673380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.525 [2024-12-06 11:28:15.682180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.525 [2024-12-06 11:28:15.682199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.525 [2024-12-06 11:28:15.682206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.691906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.691925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.691931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.702413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.702432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.702439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.708441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.708460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:09.786 [2024-12-06 11:28:15.708468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.719773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.719792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.719799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.730154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.730173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.730180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.739779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.739799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.739805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.749965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.749985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.749992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.760612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.760631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.760637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.770482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.770502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.770509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.778883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.778902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.778908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.787145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.787164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.787170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.796546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.796565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.796572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.807542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.807561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.807568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.816313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.816332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.816342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.826533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 
00:29:09.786 [2024-12-06 11:28:15.826551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.826558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.836143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.786 [2024-12-06 11:28:15.836163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.786 [2024-12-06 11:28:15.836170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.786 [2024-12-06 11:28:15.847132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.847151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.847157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.856358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.856378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.856384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.866834] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.866852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.866859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.877851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.877875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.877881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.886744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.886766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.886772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.897705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.897724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.897730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.906023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.906043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.906050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.915594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.915614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.915620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.926633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.926652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.926659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.939077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.939096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.939103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.787 [2024-12-06 11:28:15.949556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:09.787 [2024-12-06 11:28:15.949576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.787 [2024-12-06 11:28:15.949583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.047 [2024-12-06 11:28:15.961590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.047 [2024-12-06 11:28:15.961610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:15.961617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:15.973904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:15.973924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:15.973931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:15.982467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:15.982486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:15.982492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:15.992372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:15.992392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:15.992405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.003874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.003893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.003900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.015172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.015192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.015199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.025548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.025567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.048 [2024-12-06 11:28:16.025574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.036025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.036044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.036051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.043165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.043185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.043191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.049480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.049499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.049506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.059432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.059451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.059459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.068020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.068039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.068046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.073968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.073990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.073997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.083349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.083368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.083374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.092247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.092266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.092272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.101171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.101190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.101196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.106740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.106760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.106766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.115917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.115936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.115943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.124153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 
00:29:10.048 [2024-12-06 11:28:16.124172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.124178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.132495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.132513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.132520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.140943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.140961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.140968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.148768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.148787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.148794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.155398] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.155417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.155423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.165219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.165239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.165245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.174392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.174411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.174417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.185782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.185801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.185808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.195027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.195047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.048 [2024-12-06 11:28:16.195053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.048 [2024-12-06 11:28:16.204616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.048 [2024-12-06 11:28:16.204635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.049 [2024-12-06 11:28:16.204641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.049 [2024-12-06 11:28:16.213253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.049 [2024-12-06 11:28:16.213272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.049 [2024-12-06 11:28:16.213279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.222105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.222124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.222134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.230422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.230440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.230447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.237076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.237095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.237101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.248268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.248287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.248293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.257661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.257679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.257686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.267443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.267462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.267468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.276176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.276195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.276201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.285295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.285314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.285321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.293835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.293853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.310 [2024-12-06 11:28:16.293860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.301524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.301546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.301553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.309668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.309686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.309692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.316968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.316986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.316993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.325792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.325810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.325817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.333456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.333474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.333481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.342212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.342230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.342237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.348247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.348265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.348272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.359729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.359747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.359754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.368651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.368670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.368676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.380987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.381006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.381012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.392317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.392336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.392342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.403886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 
00:29:10.310 [2024-12-06 11:28:16.403905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.403911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.415717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.415737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.415743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.425870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.425889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.310 [2024-12-06 11:28:16.425895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.310 [2024-12-06 11:28:16.436127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0) 00:29:10.310 [2024-12-06 11:28:16.436145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.311 [2024-12-06 11:28:16.436152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.311 [2024-12-06 11:28:16.439254] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0)
00:29:10.311 [2024-12-06 11:28:16.439272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.311 [2024-12-06 11:28:16.439279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... ~29 similar data digest error / COMMAND TRANSIENT TRANSPORT ERROR READ record triplets on qid:1 (tqpair 0x8091b0), timestamps 11:28:16.451 through 11:28:16.666, trimmed ...]
00:29:10.573 3313.50 IOPS, 414.19 MiB/s [2024-12-06T10:28:16.740Z]
00:29:10.573 [2024-12-06 11:28:16.674400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8091b0)
00:29:10.573 [2024-12-06 11:28:16.674419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.573 [2024-12-06 11:28:16.674426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.573
00:29:10.573 Latency(us)
00:29:10.573 [2024-12-06T10:28:16.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:10.574 nvme0n1 : 2.00 3316.67 414.58 0.00 0.00 4819.42 781.65 18240.85
00:29:10.574 [2024-12-06T10:28:16.741Z] ===================================================================================================================
00:29:10.574 [2024-12-06T10:28:16.741Z] Total : 3316.67 414.58 0.00 0.00 4819.42 781.65 18240.85
00:29:10.574 {
00:29:10.574 "results": [
00:29:10.574 {
00:29:10.574 "job": "nvme0n1",
00:29:10.574 "core_mask": "0x2",
00:29:10.574 "workload": "randread",
00:29:10.574 "status": "finished",
00:29:10.574 "queue_depth": 16,
00:29:10.574 "io_size": 131072,
00:29:10.574 "runtime": 2.002912,
00:29:10.574 "iops": 3316.670927130099,
00:29:10.574 "mibps": 414.58386589126235,
00:29:10.574 "io_failed": 0,
00:29:10.574 "io_timeout": 0,
00:29:10.574 "avg_latency_us": 4819.418311004064,
00:29:10.574 "min_latency_us": 781.6533333333333,
00:29:10.574 "max_latency_us": 18240.853333333333
00:29:10.574 }
00:29:10.574 ],
00:29:10.574 "core_count": 1
00:29:10.574 }
00:29:10.574 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:10.574 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:10.574 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:10.574 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:10.574 | .driver_specific
00:29:10.574 | .nvme_error
00:29:10.574 | .status_code
00:29:10.574 | .command_transient_transport_error'
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3613361
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3613361 ']'
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3613361
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3613361
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3613361'
killing process with pid 3613361
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3613361
Received shutdown signal, test time was about 2.000000 seconds
00:29:10.834
00:29:10.834 Latency(us)
00:29:10.834 [2024-12-06T10:28:17.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.834 [2024-12-06T10:28:17.001Z] ===================================================================================================================
00:29:10.834 [2024-12-06T10:28:17.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:10.834 11:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3613361
00:29:11.095 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:11.095 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:11.095 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:11.095 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:11.095 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:11.095 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3614053
00:29:11.095 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3614053 /var/tmp/bperf.sock
00:29:11.096 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3614053 ']'
00:29:11.096 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:11.096 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:11.096 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:11.096 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:11.096 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:11.096 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.096 [2024-12-06 11:28:17.096633] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization...
00:29:11.096 [2024-12-06 11:28:17.096692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614053 ]
00:29:11.096 [2024-12-06 11:28:17.186334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:11.096 [2024-12-06 11:28:17.216070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:12.039 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:12.039 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:12.039 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:12.039 11:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:12.039 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:12.039 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:12.039 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:12.039 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:12.039 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:12.039 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:12.299 nvme0n1
00:29:12.299 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:12.299 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:12.299 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:12.560 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:12.560 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:12.560 11:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:12.560 Running I/O for 2 seconds...
00:29:12.560 [2024-12-06 11:28:18.574148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eedd58
00:29:12.560 [2024-12-06 11:28:18.576021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:12.560 [2024-12-06 11:28:18.576048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... ~26 similar Data digest error / COMMAND TRANSIENT TRANSPORT ERROR WRITE record triplets on qid:1 (tqpair 0x1ea3ad0), timestamps 11:28:18.583 through 11:28:18.881, trimmed ...]
00:29:12.823 [2024-12-06 11:28:18.892407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eebb98
00:29:12.823 [2024-12-06 11:28:18.893523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.823 [2024-12-06 11:28:18.893540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:12.823 [2024-12-06 11:28:18.904248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eebb98 00:29:12.823 [2024-12-06 11:28:18.905369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.823 [2024-12-06 11:28:18.905385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:12.823 [2024-12-06 11:28:18.916089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eebb98 00:29:12.823 [2024-12-06 11:28:18.917202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.823 [2024-12-06 11:28:18.917218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:12.823 [2024-12-06 11:28:18.927925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eebb98 00:29:12.823 [2024-12-06 11:28:18.929047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.823 [2024-12-06 11:28:18.929064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:12.823 [2024-12-06 11:28:18.941274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ea3ad0) with pdu=0x200016eebb98 00:29:12.823 [2024-12-06 11:28:18.943028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.823 [2024-12-06 11:28:18.943047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:12.823 [2024-12-06 11:28:18.951579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef96f8 00:29:12.823 [2024-12-06 11:28:18.952667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.823 [2024-12-06 11:28:18.952684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:12.824 [2024-12-06 11:28:18.964873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef96f8 00:29:12.824 [2024-12-06 11:28:18.966624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.824 [2024-12-06 11:28:18.966640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:12.824 [2024-12-06 11:28:18.975234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eeaab8 00:29:12.824 [2024-12-06 11:28:18.976337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.824 [2024-12-06 11:28:18.976354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:12.824 [2024-12-06 11:28:18.987046] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efd640 00:29:12.824 [2024-12-06 11:28:18.988147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.824 [2024-12-06 11:28:18.988164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:18.998878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efd640 00:29:13.086 [2024-12-06 11:28:18.999976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:18.999993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:19.010705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efd640 00:29:13.086 [2024-12-06 11:28:19.011801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:19.011819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:19.022530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efd640 00:29:13.086 [2024-12-06 11:28:19.023627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:19.023644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:29:13.086 [2024-12-06 11:28:19.034350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efd640 00:29:13.086 [2024-12-06 11:28:19.035452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:19.035469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:19.046180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efd640 00:29:13.086 [2024-12-06 11:28:19.047256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:19.047274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:19.057214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eea248 00:29:13.086 [2024-12-06 11:28:19.058295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:19.058312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:19.071375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eeb328 00:29:13.086 [2024-12-06 11:28:19.073093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:19.073110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:19.081670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8e88 00:29:13.086 [2024-12-06 11:28:19.082772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.086 [2024-12-06 11:28:19.082789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:13.086 [2024-12-06 11:28:19.093494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8e88 00:29:13.087 [2024-12-06 11:28:19.094540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.094557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.105386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8e88 00:29:13.087 [2024-12-06 11:28:19.106495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.106511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.117257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8e88 00:29:13.087 [2024-12-06 11:28:19.118346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.118363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.129029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eeaab8 00:29:13.087 [2024-12-06 11:28:19.130119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.130136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.140828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8618 00:29:13.087 [2024-12-06 11:28:19.141898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.141915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.152662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8618 00:29:13.087 [2024-12-06 11:28:19.153739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.153756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.164474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8618 00:29:13.087 [2024-12-06 11:28:19.165576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 
[2024-12-06 11:28:19.165593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.176303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8618 00:29:13.087 [2024-12-06 11:28:19.177362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.177379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.188149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef8618 00:29:13.087 [2024-12-06 11:28:19.189210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.189228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.199199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef0788 00:29:13.087 [2024-12-06 11:28:19.200244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.200261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.213370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eef6a8 00:29:13.087 [2024-12-06 11:28:19.215063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13640 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.215080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.223646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efc560 00:29:13.087 [2024-12-06 11:28:19.224587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.224603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.234625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efbcf0 00:29:13.087 [2024-12-06 11:28:19.235518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.235535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:13.087 [2024-12-06 11:28:19.248625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee88f8 00:29:13.087 [2024-12-06 11:28:19.250293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.087 [2024-12-06 11:28:19.250313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:13.348 [2024-12-06 11:28:19.258857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee8088 00:29:13.348 [2024-12-06 11:28:19.259872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:120 nsid:1 lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.348 [2024-12-06 11:28:19.259890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.348 [2024-12-06 11:28:19.270711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee8088 00:29:13.348 [2024-12-06 11:28:19.271721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.348 [2024-12-06 11:28:19.271737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.348 [2024-12-06 11:28:19.282509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee8088 00:29:13.348 [2024-12-06 11:28:19.283502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.348 [2024-12-06 11:28:19.283519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.293508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee0630 00:29:13.349 [2024-12-06 11:28:19.294488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.294504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.306135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef1430 00:29:13.349 [2024-12-06 11:28:19.307133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.307150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.318007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef1430 00:29:13.349 [2024-12-06 11:28:19.318994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.319010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.329825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef1430 00:29:13.349 [2024-12-06 11:28:19.330823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.330839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.340872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef6020 00:29:13.349 [2024-12-06 11:28:19.341800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.341816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.353436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef6020 
00:29:13.349 [2024-12-06 11:28:19.354431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.354449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.365474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef6020 00:29:13.349 [2024-12-06 11:28:19.366457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.366474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.377303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef6020 00:29:13.349 [2024-12-06 11:28:19.378278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.378295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.389132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef6020 00:29:13.349 [2024-12-06 11:28:19.390127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.390144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.400261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3ad0) with pdu=0x200016ef1ca0 00:29:13.349 [2024-12-06 11:28:19.401218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.401234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.412893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef0bc0 00:29:13.349 [2024-12-06 11:28:19.413851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.413872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.426291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eefae0 00:29:13.349 [2024-12-06 11:28:19.427922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.427939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.436584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee01f8 00:29:13.349 [2024-12-06 11:28:19.437557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.437574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.447619] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.349 [2024-12-06 11:28:19.448534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.448551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.460195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.349 [2024-12-06 11:28:19.461141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.461157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.472017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.349 [2024-12-06 11:28:19.472977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.472993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.483829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.349 [2024-12-06 11:28:19.484748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.484765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 00:29:13.349 [2024-12-06 11:28:19.495651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.349 [2024-12-06 11:28:19.496614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.496631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.349 [2024-12-06 11:28:19.507500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.349 [2024-12-06 11:28:19.508463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.349 [2024-12-06 11:28:19.508481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.519335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.611 [2024-12-06 11:28:19.520299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.520315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.531204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.611 [2024-12-06 11:28:19.532175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.532192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.543027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.611 [2024-12-06 11:28:19.543986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.544003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.554856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4de8 00:29:13.611 [2024-12-06 11:28:19.556348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.556369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.611 21358.00 IOPS, 83.43 MiB/s [2024-12-06T10:28:19.778Z] [2024-12-06 11:28:19.565911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edf988 00:29:13.611 [2024-12-06 11:28:19.566846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.566865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.580109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee0a68 00:29:13.611 [2024-12-06 11:28:19.581725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 
11:28:19.581742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.589668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee27f0 00:29:13.611 [2024-12-06 11:28:19.590572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.590590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.602249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee27f0 00:29:13.611 [2024-12-06 11:28:19.603200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.603217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.614071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee27f0 00:29:13.611 [2024-12-06 11:28:19.615020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.615037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.625903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee27f0 00:29:13.611 [2024-12-06 11:28:19.626846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23856 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:13.611 [2024-12-06 11:28:19.626866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.611 [2024-12-06 11:28:19.637730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee27f0 00:29:13.612 [2024-12-06 11:28:19.638687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.638704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.649501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef57b0 00:29:13.612 [2024-12-06 11:28:19.650445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.650461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.661329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef57b0 00:29:13.612 [2024-12-06 11:28:19.662243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.662260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.673145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef57b0 00:29:13.612 [2024-12-06 11:28:19.674130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:112 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.674147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.684996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edfdc0 00:29:13.612 [2024-12-06 11:28:19.685918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.685935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.696839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edfdc0 00:29:13.612 [2024-12-06 11:28:19.697731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.697748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.708616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee5220 00:29:13.612 [2024-12-06 11:28:19.709537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.709553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.720439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee5220 00:29:13.612 [2024-12-06 11:28:19.721322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.721339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.732204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edf550 00:29:13.612 [2024-12-06 11:28:19.733114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.733130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.744083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edf550 00:29:13.612 [2024-12-06 11:28:19.744993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.745009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.755901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edf550 00:29:13.612 [2024-12-06 11:28:19.756805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.756821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:13.612 [2024-12-06 11:28:19.767742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edf550 
00:29:13.612 [2024-12-06 11:28:19.768655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.612 [2024-12-06 11:28:19.768671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.781095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edf550 00:29:13.874 [2024-12-06 11:28:19.782654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.782670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.790651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef46d0 00:29:13.874 [2024-12-06 11:28:19.791535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.791551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.803276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.804173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.804189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.815118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.816017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.816033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.826939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.827832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.827848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.838758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.839659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.839676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.850593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.851497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.851514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.862425] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.863330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.863350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.874248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.875103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.875120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.886075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.886932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.886948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.897915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.898804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.898821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:29:13.874 [2024-12-06 11:28:19.911250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4f40 00:29:13.874 [2024-12-06 11:28:19.912792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.912809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.921602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:13.874 [2024-12-06 11:28:19.922508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.922525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.933408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef35f0 00:29:13.874 [2024-12-06 11:28:19.934304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.934321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.945267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef35f0 00:29:13.874 [2024-12-06 11:28:19.946143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.946160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.957108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef35f0 00:29:13.874 [2024-12-06 11:28:19.957998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.958015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.968140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ede470 00:29:13.874 [2024-12-06 11:28:19.969030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.874 [2024-12-06 11:28:19.969046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:13.874 [2024-12-06 11:28:19.980741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4298 00:29:13.875 [2024-12-06 11:28:19.981618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.875 [2024-12-06 11:28:19.981635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:13.875 [2024-12-06 11:28:19.992559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4298 00:29:13.875 [2024-12-06 11:28:19.993412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.875 [2024-12-06 11:28:19.993430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:13.875 [2024-12-06 11:28:20.004891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4298 00:29:13.875 [2024-12-06 11:28:20.005771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.875 [2024-12-06 11:28:20.005788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:13.875 [2024-12-06 11:28:20.015934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ede8a8 00:29:13.875 [2024-12-06 11:28:20.016675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.875 [2024-12-06 11:28:20.016692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:13.875 [2024-12-06 11:28:20.028494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef3a28 00:29:13.875 [2024-12-06 11:28:20.029356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.875 [2024-12-06 11:28:20.029373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.041838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef3a28 00:29:14.136 [2024-12-06 11:28:20.043303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.043320] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.054427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef3a28 00:29:14.136 [2024-12-06 11:28:20.055938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.055954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.066289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef3a28 00:29:14.136 [2024-12-06 11:28:20.067798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.067815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.078141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef3a28 00:29:14.136 [2024-12-06 11:28:20.079652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.079669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.091470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef3a28 00:29:14.136 [2024-12-06 11:28:20.093623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:14.136 [2024-12-06 11:28:20.093639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.101103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4140 00:29:14.136 [2024-12-06 11:28:20.102594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.102610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.115273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef3a28 00:29:14.136 [2024-12-06 11:28:20.117423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.117439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.124842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef31b8 00:29:14.136 [2024-12-06 11:28:20.126337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.126353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.137433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:14.136 [2024-12-06 11:28:20.138931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:16874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.138948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.149316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:14.136 [2024-12-06 11:28:20.150814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.150830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.161141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:14.136 [2024-12-06 11:28:20.162646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.162663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.172962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:14.136 [2024-12-06 11:28:20.174450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.174470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.184782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:14.136 [2024-12-06 11:28:20.186293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.186309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.196602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:14.136 [2024-12-06 11:28:20.198109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.198125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.209923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef2510 00:29:14.136 [2024-12-06 11:28:20.212039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.212055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.220228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef31b8 00:29:14.136 [2024-12-06 11:28:20.221677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.221694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.233563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4b08 
00:29:14.136 [2024-12-06 11:28:20.235692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.235708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.243930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edece0 00:29:14.136 [2024-12-06 11:28:20.245409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.245425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.254969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef35f0 00:29:14.136 [2024-12-06 11:28:20.256430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.256446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.267583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.136 [2024-12-06 11:28:20.269071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.269087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.279443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.136 [2024-12-06 11:28:20.280911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.280927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.136 [2024-12-06 11:28:20.291280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.136 [2024-12-06 11:28:20.292758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.136 [2024-12-06 11:28:20.292774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.303118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.304564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.304581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.314945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.316412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.316429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.326752] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.328235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.328252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.338586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.340055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.340071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.350406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.351882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.351899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.362242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.363919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.363936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:29:14.397 [2024-12-06 11:28:20.374255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.375729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.375745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.386067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.387545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.397 [2024-12-06 11:28:20.387562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.397 [2024-12-06 11:28:20.397874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eddc00 00:29:14.397 [2024-12-06 11:28:20.399346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.399363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.409809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef35f0 00:29:14.398 [2024-12-06 11:28:20.411314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.411331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.420895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edfdc0 00:29:14.398 [2024-12-06 11:28:20.422354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.422371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.432693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef0ff8 00:29:14.398 [2024-12-06 11:28:20.434114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.434131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.445246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016edece0 00:29:14.398 [2024-12-06 11:28:20.446685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.446701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.458641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee4140 00:29:14.398 [2024-12-06 11:28:20.460708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.460724] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.470418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef4298 00:29:14.398 [2024-12-06 11:28:20.472515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.472532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.479977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ef0788 00:29:14.398 [2024-12-06 11:28:20.481411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.481433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.490664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016eebfd0 00:29:14.398 [2024-12-06 11:28:20.491608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.491624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.502632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee23b8 00:29:14.398 [2024-12-06 11:28:20.503553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.503570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.514443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016ee23b8 00:29:14.398 [2024-12-06 11:28:20.515390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.515407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.526191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efda78 00:29:14.398 [2024-12-06 11:28:20.527130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.527147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.538033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efda78 00:29:14.398 [2024-12-06 11:28:20.538952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.538969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:14.398 [2024-12-06 11:28:20.549877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efda78 00:29:14.398 [2024-12-06 11:28:20.550802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:14.398 [2024-12-06 11:28:20.550817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:14.398 21483.00 IOPS, 83.92 MiB/s [2024-12-06T10:28:20.565Z] [2024-12-06 11:28:20.561889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3ad0) with pdu=0x200016efda78 00:29:14.398 [2024-12-06 11:28:20.562811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.398 [2024-12-06 11:28:20.562826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:14.660 00:29:14.660 Latency(us) 00:29:14.660 [2024-12-06T10:28:20.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.660 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.660 nvme0n1 : 2.01 21483.97 83.92 0.00 0.00 5948.73 2280.11 15291.73 00:29:14.660 [2024-12-06T10:28:20.827Z] =================================================================================================================== 00:29:14.660 [2024-12-06T10:28:20.827Z] Total : 21483.97 83.92 0.00 0.00 5948.73 2280.11 15291.73 00:29:14.660 { 00:29:14.660 "results": [ 00:29:14.660 { 00:29:14.660 "job": "nvme0n1", 00:29:14.660 "core_mask": "0x2", 00:29:14.660 "workload": "randwrite", 00:29:14.660 "status": "finished", 00:29:14.660 "queue_depth": 128, 00:29:14.660 "io_size": 4096, 00:29:14.660 "runtime": 2.005868, 00:29:14.660 "iops": 21483.966043627996, 00:29:14.660 "mibps": 83.92174235792186, 00:29:14.660 "io_failed": 0, 00:29:14.660 "io_timeout": 0, 00:29:14.660 "avg_latency_us": 5948.73395151684, 00:29:14.660 "min_latency_us": 2280.1066666666666, 00:29:14.660 "max_latency_us": 15291.733333333334 00:29:14.660 } 00:29:14.660 ], 00:29:14.660 "core_count": 1 00:29:14.660 } 00:29:14.660 11:28:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:14.660 | .driver_specific 00:29:14.660 | .nvme_error 00:29:14.660 | .status_code 00:29:14.660 | .command_transient_transport_error' 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 )) 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3614053 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3614053 ']' 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3614053 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.660 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3614053 00:29:14.921 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:14.921 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:14.921 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3614053' 
00:29:14.921 killing process with pid 3614053 00:29:14.921 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3614053 00:29:14.921 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.921 00:29:14.921 Latency(us) 00:29:14.921 [2024-12-06T10:28:21.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.921 [2024-12-06T10:28:21.089Z] =================================================================================================================== 00:29:14.922 [2024-12-06T10:28:21.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3614053 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3614868 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3614868 /var/tmp/bperf.sock 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3614868 ']' 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:14.922 11:28:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.922 11:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.922 [2024-12-06 11:28:20.988783] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:14.922 [2024-12-06 11:28:20.988841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614868 ] 00:29:14.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.922 Zero copy mechanism will not be used. 
00:29:14.922 [2024-12-06 11:28:21.077981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.183 [2024-12-06 11:28:21.107632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.755 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.755 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:15.755 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.755 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.015 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:16.015 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.015 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.015 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.015 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.015 11:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.275 nvme0n1 00:29:16.275 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:16.275 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.275 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.275 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.275 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:16.275 11:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.275 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:16.275 Zero copy mechanism will not be used. 00:29:16.275 Running I/O for 2 seconds... 00:29:16.538 [2024-12-06 11:28:22.446539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.538 [2024-12-06 11:28:22.446828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.538 [2024-12-06 11:28:22.446855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.538 [2024-12-06 11:28:22.456973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.538 [2024-12-06 11:28:22.457077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.538 [2024-12-06 11:28:22.457095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.539 
[2024-12-06 11:28:22.467553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.467832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.467850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.479227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.479514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.479532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.487326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.487474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.487490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.495009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.495254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.495270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.503797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.503861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.503881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.510516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.510588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.510604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.516449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.516732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.516752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.526430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.526517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.526532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.537962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.538110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.538126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.548780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.548879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.548895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.559442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.559753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.559769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.569561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.569866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.569882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.580989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.581259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.581275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.592904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.593134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.593150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.600603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.600679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.600694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.606244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.606518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:16.539 [2024-12-06 11:28:22.606537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.612880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.612939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.612954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.620922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.621066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.621082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.630164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.630224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.630240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.637566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.637641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.637656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.644362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.644595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.644610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.651066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.651147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.651162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.657451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.657512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.657528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.664929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.665085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.665101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.673676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.673948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.673964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.681635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.681902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.539 [2024-12-06 11:28:22.681918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.539 [2024-12-06 11:28:22.692414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:16.539 [2024-12-06 11:28:22.692687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.540 [2024-12-06 11:28:22.692703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:16.540 [2024-12-06 11:28:22.703818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 
00:29:16.801 [2024-12-06 11:28:22.704123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.801 [2024-12-06 11:28:22.704141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.715905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.716156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.716172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.727529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.727780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.727796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.739190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.739479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.739495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.750400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.750671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.750688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.762547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.762809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.762828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.774128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.774439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.774455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.785798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.785889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.785904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.796233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.796483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.796499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.808045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.808318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.808334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.819199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.819432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.819448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.830834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.831165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.831182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.842360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.842568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.842584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.853486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.853560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.853575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.864285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.864552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.864571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.876127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.876491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.876507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.888367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.888442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.888457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.897877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.898093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.898109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.908887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.909147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.909163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.918047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.918119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.918134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.926378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.926437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.926453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.934580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.934792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.934808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.943095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.943394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.943410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.951602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.951804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.951820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.958739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.958818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.958834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:16.802 [2024-12-06 11:28:22.966597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:16.802 [2024-12-06 11:28:22.966652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.802 [2024-12-06 11:28:22.966667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.063 [2024-12-06 11:28:22.975132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.063 [2024-12-06 11:28:22.975198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.063 [2024-12-06 11:28:22.975214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.063 [2024-12-06 11:28:22.981230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.063 [2024-12-06 11:28:22.981283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.063 [2024-12-06 11:28:22.981299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.063 [2024-12-06 11:28:22.985536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.063 [2024-12-06 11:28:22.985596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.063 [2024-12-06 11:28:22.985612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.063 [2024-12-06 11:28:22.989670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.063 [2024-12-06 11:28:22.989730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.063 [2024-12-06 11:28:22.989745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.063 [2024-12-06 11:28:22.995946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.063 [2024-12-06 11:28:22.996008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.063 [2024-12-06 11:28:22.996023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.063 [2024-12-06 11:28:23.002153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.063 [2024-12-06 11:28:23.002228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.063 [2024-12-06 11:28:23.002246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.063 [2024-12-06 11:28:23.007512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.007587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.007602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.017230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.017297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.017312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.023962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.024230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.024247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.030074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.030137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.030152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.037000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.037072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.037087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.044326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.044422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.044437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.049987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.050257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.050273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.058291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.058362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.058378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.066511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.066586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.066605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.073722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.073797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.073813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.079787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.079844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.079859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.085512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.085583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.085599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.092357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.092589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.092605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.099788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.099883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.099898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.107769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.108007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.108023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.117395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.117453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.117469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.123502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.123755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.123772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.132042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.132249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.132264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.138749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.138822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.138836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.146318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.146394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.146409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.153773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.153959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.153975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.162082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.162331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.162346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.169268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.169349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.169364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.174780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.174884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.174901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.181718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.181827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.181843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.189216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.189364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.189382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.194997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.064 [2024-12-06 11:28:23.195086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.064 [2024-12-06 11:28:23.195100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.064 [2024-12-06 11:28:23.202280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.065 [2024-12-06 11:28:23.202354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.065 [2024-12-06 11:28:23.202369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.065 [2024-12-06 11:28:23.211782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.065 [2024-12-06 11:28:23.212051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.065 [2024-12-06 11:28:23.212068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.065 [2024-12-06 11:28:23.220142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.065 [2024-12-06 11:28:23.220358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.065 [2024-12-06 11:28:23.220375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.065 [2024-12-06 11:28:23.227695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.065 [2024-12-06 11:28:23.227932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.065 [2024-12-06 11:28:23.227948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.325 [2024-12-06 11:28:23.235829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.325 [2024-12-06 11:28:23.236037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.325 [2024-12-06 11:28:23.236054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.325 [2024-12-06 11:28:23.244852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.325 [2024-12-06 11:28:23.245124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.325 [2024-12-06 11:28:23.245140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.325 [2024-12-06 11:28:23.253392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.325 [2024-12-06 11:28:23.253470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.325 [2024-12-06 11:28:23.253485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.325 [2024-12-06 11:28:23.261877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.325 [2024-12-06 11:28:23.262124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.325 [2024-12-06 11:28:23.262143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:17.325 [2024-12-06 11:28:23.269983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.325 [2024-12-06 11:28:23.270050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.325 [2024-12-06 11:28:23.270066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:17.325 [2024-12-06 11:28:23.279114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.326 [2024-12-06 11:28:23.279299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.326 [2024-12-06 11:28:23.279315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:17.326 [2024-12-06 11:28:23.289855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.326 [2024-12-06 11:28:23.290133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:17.326 [2024-12-06 11:28:23.290149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:17.326 [2024-12-06 11:28:23.300276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90
00:29:17.326 [2024-12-06 11:28:23.300543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.300560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.311238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.311496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.311512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.321723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.322013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.322029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.332942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.333262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.333278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.343181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.343368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.343384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.353274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.353591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.353607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.361966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.362176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.362191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.368522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.368607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.368623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.374771] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.375039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.375055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.381129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.381210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.381226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.386661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.386742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.386758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.394255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.394326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.394341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:29:17.326 [2024-12-06 11:28:23.402091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.402316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.402333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.411494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.411633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.411649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.421671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.421944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.421960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.432240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.433046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.433063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.326 3555.00 IOPS, 444.38 MiB/s [2024-12-06T10:28:23.493Z] [2024-12-06 11:28:23.443464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.443743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.443759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.453948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.454223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.454238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.464748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.465007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.465024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.475747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.476014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 
11:28:23.476030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.326 [2024-12-06 11:28:23.487043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.326 [2024-12-06 11:28:23.487360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.326 [2024-12-06 11:28:23.487377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.498060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.498221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.498237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.508448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.508523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.508538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.519612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.519845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.519865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.530687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.531026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.531043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.541438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.541702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.541718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.552916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.553173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.553189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.563381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.563440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.563456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.574205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.574502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.574518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.584773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.585051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.585067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.594849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.595016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.587 [2024-12-06 11:28:23.595032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.587 [2024-12-06 11:28:23.605553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.587 [2024-12-06 11:28:23.605845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.605867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.615404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.615620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.615636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.625907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.626171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.626187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.636126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.636436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.636453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.646813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 
00:29:17.588 [2024-12-06 11:28:23.647083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.647099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.657115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.657368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.657384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.667655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.667926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.667943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.678237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.678305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.678320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.689275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.689548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.689570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.700402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.700673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.700690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.709919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.710206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.710221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.720485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.720691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.720707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.731060] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.731334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.731350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.740927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.741170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.741188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.588 [2024-12-06 11:28:23.751537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.588 [2024-12-06 11:28:23.751814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.588 [2024-12-06 11:28:23.751830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.849 [2024-12-06 11:28:23.761673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.849 [2024-12-06 11:28:23.761921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.761938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:29:17.850 [2024-12-06 11:28:23.771998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.772250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.772266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.781988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.782271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.782287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.792158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.792394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.792411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.803284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.803520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.803536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.813003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.813094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.813110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.820291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.820364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.820379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.827882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.828032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.828047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.835135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.835332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.835348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.842080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.842138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.842154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.849051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.849108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.849123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.856010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.856078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.856093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.865269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.865342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:17.850 [2024-12-06 11:28:23.865358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.872717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.873061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.873077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.880459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.880704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.880720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.890257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.890351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.890366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.900390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.900655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13696 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.900672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.910999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.911277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.911293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.921235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.921305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.921320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.931857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.932164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.932183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.942635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.942892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.942909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.952808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.953136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.953152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.962908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.963169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.963185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.971305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.971424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.971440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.978170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 
00:29:17.850 [2024-12-06 11:28:23.978358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.978374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.986814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.986942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.986957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:23.995426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:23.995492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.850 [2024-12-06 11:28:23.995508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:17.850 [2024-12-06 11:28:24.000767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.850 [2024-12-06 11:28:24.001042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.851 [2024-12-06 11:28:24.001059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:17.851 [2024-12-06 11:28:24.006165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.851 [2024-12-06 11:28:24.006238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.851 [2024-12-06 11:28:24.006253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:17.851 [2024-12-06 11:28:24.009984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:17.851 [2024-12-06 11:28:24.010066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.851 [2024-12-06 11:28:24.010082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.113 [2024-12-06 11:28:24.015505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.015775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.015791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.113 [2024-12-06 11:28:24.023034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.023174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.023189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.113 [2024-12-06 11:28:24.029276] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.029592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.029608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.113 [2024-12-06 11:28:24.033915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.033979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.033994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.113 [2024-12-06 11:28:24.037937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.038007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.038022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.113 [2024-12-06 11:28:24.041766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.041829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.041844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:29:18.113 [2024-12-06 11:28:24.045447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.045504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.045519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.113 [2024-12-06 11:28:24.049103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.113 [2024-12-06 11:28:24.049169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.113 [2024-12-06 11:28:24.049185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.053069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.053156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.053172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.056788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.056847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.056867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.060354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.060436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.060452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.063840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.063924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.063939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.068385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.068453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.068469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.071928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.071991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.072006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.075555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.075615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.075630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.078967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.079052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.079070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.085259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.085391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.085406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.091063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.091156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.114 [2024-12-06 11:28:24.091171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.095707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.095790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.099612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.099689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.099705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.106801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.107096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.107113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.114384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.114461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.114476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.118423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.118511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.118526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.122201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.122283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.122298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.126028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.126120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.126135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.129827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.129923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.129939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.137027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.137087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.137102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.140524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.140594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.140609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.143985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.144043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.144058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.150158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 
[2024-12-06 11:28:24.150413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.150429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.155009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.155071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.155087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.158507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.158568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.158583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.162076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.162137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.162152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.166167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.166226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.166242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.169684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.114 [2024-12-06 11:28:24.169937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.114 [2024-12-06 11:28:24.169953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.114 [2024-12-06 11:28:24.177674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.177744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.177759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.181227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.181284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.181299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.184751] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.184812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.184827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.188471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.188536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.188551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.192185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.192281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.192296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.200504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.200899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.200915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:29:18.115 [2024-12-06 11:28:24.211540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.211827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.211846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.221491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.221777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.221793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.231747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.232014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.232030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.241203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.241291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.241307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.250809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.251175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.251191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.260176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.260303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.115 [2024-12-06 11:28:24.269641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.115 [2024-12-06 11:28:24.269868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.115 [2024-12-06 11:28:24.269884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.376 [2024-12-06 11:28:24.280398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.376 [2024-12-06 11:28:24.280554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-12-06 11:28:24.280569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.376 [2024-12-06 11:28:24.289845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.376 [2024-12-06 11:28:24.290048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.290064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.298305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.298571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.298588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.308280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.308519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.308535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.317888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.317966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.377 [2024-12-06 11:28:24.317981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.324748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.324803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.324818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.328267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.328331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.328346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.331771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.331831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.331846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.335277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.335337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17440 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.335352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.339020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.339106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.339122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.343794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.343858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.343879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.347294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.347360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.347376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.351051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.351110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.351126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.354523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.354586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.354601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.358011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.358072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.358087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.361462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.361525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.361541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.365133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 
00:29:18.377 [2024-12-06 11:28:24.365190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.365205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.368919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.368987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.369002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.377387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.377484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.377499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.384039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.384092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.384110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.388281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.388416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.388432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.394039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.394138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.394153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.400284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.400339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.400355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.404477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.404535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.404550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.409081] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.409137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.409153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.412742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.412808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.412823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.416288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.416349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.416365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.377 [2024-12-06 11:28:24.419817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.419881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.377 [2024-12-06 11:28:24.419897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:29:18.377 [2024-12-06 11:28:24.423319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.377 [2024-12-06 11:28:24.423389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.378 [2024-12-06 11:28:24.423404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:18.378 [2024-12-06 11:28:24.427387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.378 [2024-12-06 11:28:24.427442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.378 [2024-12-06 11:28:24.427457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:18.378 [2024-12-06 11:28:24.434797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3e10) with pdu=0x200016efef90 00:29:18.378 [2024-12-06 11:28:24.435419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.378 [2024-12-06 11:28:24.435436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:18.378 3931.50 IOPS, 491.44 MiB/s 00:29:18.378 Latency(us) 00:29:18.378 [2024-12-06T10:28:24.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.378 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:18.378 nvme0n1 : 2.00 3932.73 491.59 0.00 0.00 4062.36 1549.65 15510.19 00:29:18.378 [2024-12-06T10:28:24.545Z] 
=================================================================================================================== 00:29:18.378 [2024-12-06T10:28:24.545Z] Total : 3932.73 491.59 0.00 0.00 4062.36 1549.65 15510.19 00:29:18.378 { 00:29:18.378 "results": [ 00:29:18.378 { 00:29:18.378 "job": "nvme0n1", 00:29:18.378 "core_mask": "0x2", 00:29:18.378 "workload": "randwrite", 00:29:18.378 "status": "finished", 00:29:18.378 "queue_depth": 16, 00:29:18.378 "io_size": 131072, 00:29:18.378 "runtime": 2.004208, 00:29:18.378 "iops": 3932.7255454523684, 00:29:18.378 "mibps": 491.59069318154604, 00:29:18.378 "io_failed": 0, 00:29:18.378 "io_timeout": 0, 00:29:18.378 "avg_latency_us": 4062.356884039584, 00:29:18.378 "min_latency_us": 1549.6533333333334, 00:29:18.378 "max_latency_us": 15510.186666666666 00:29:18.378 } 00:29:18.378 ], 00:29:18.378 "core_count": 1 00:29:18.378 } 00:29:18.378 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:18.378 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:18.378 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:18.378 | .driver_specific 00:29:18.378 | .nvme_error 00:29:18.378 | .status_code 00:29:18.378 | .command_transient_transport_error' 00:29:18.378 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 254 > 0 )) 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3614868 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3614868 ']' 00:29:18.638 11:28:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3614868 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3614868 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3614868' 00:29:18.638 killing process with pid 3614868 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3614868 00:29:18.638 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.638 00:29:18.638 Latency(us) 00:29:18.638 [2024-12-06T10:28:24.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.638 [2024-12-06T10:28:24.805Z] =================================================================================================================== 00:29:18.638 [2024-12-06T10:28:24.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.638 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3614868 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3612337 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3612337 ']' 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # kill -0 3612337 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3612337 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3612337' 00:29:18.898 killing process with pid 3612337 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3612337 00:29:18.898 11:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3612337 00:29:18.898 00:29:18.898 real 0m16.680s 00:29:18.898 user 0m33.013s 00:29:18.898 sys 0m3.558s 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.898 ************************************ 00:29:18.898 END TEST nvmf_digest_error 00:29:18.898 ************************************ 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.898 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.898 rmmod nvme_tcp 00:29:19.157 rmmod nvme_fabrics 00:29:19.157 rmmod nvme_keyring 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3612337 ']' 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3612337 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3612337 ']' 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3612337 00:29:19.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3612337) - No such process 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3612337 is not found' 00:29:19.157 Process with pid 3612337 is not found 00:29:19.157 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:19.158 11:28:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.158 11:28:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.065 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.065 00:29:21.065 real 0m44.310s 00:29:21.065 user 1m8.521s 00:29:21.065 sys 0m13.533s 00:29:21.065 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.065 11:28:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:21.065 ************************************ 00:29:21.065 END TEST nvmf_digest 00:29:21.065 ************************************ 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.325 ************************************ 00:29:21.325 START TEST nvmf_bdevperf 00:29:21.325 ************************************ 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:21.325 * Looking for test storage... 00:29:21.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:29:21.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.325 --rc genhtml_branch_coverage=1 00:29:21.325 --rc genhtml_function_coverage=1 00:29:21.325 --rc genhtml_legend=1 00:29:21.325 --rc geninfo_all_blocks=1 00:29:21.325 --rc geninfo_unexecuted_blocks=1 00:29:21.325 00:29:21.325 ' 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:21.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.325 --rc genhtml_branch_coverage=1 00:29:21.325 --rc genhtml_function_coverage=1 00:29:21.325 --rc genhtml_legend=1 00:29:21.325 --rc geninfo_all_blocks=1 00:29:21.325 --rc geninfo_unexecuted_blocks=1 00:29:21.325 00:29:21.325 ' 00:29:21.325 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:21.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.325 --rc genhtml_branch_coverage=1 00:29:21.326 --rc genhtml_function_coverage=1 00:29:21.326 --rc genhtml_legend=1 00:29:21.326 --rc geninfo_all_blocks=1 00:29:21.326 --rc geninfo_unexecuted_blocks=1 00:29:21.326 00:29:21.326 ' 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:21.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.326 --rc genhtml_branch_coverage=1 00:29:21.326 --rc genhtml_function_coverage=1 00:29:21.326 --rc genhtml_legend=1 00:29:21.326 --rc geninfo_all_blocks=1 00:29:21.326 --rc geninfo_unexecuted_blocks=1 00:29:21.326 00:29:21.326 ' 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.326 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:21.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:29:21.587 11:28:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:29:29.729 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:29.730 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:29.730 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:29.730 Found net devices under 0000:31:00.0: cvl_0_0 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:29.730 Found net devices under 0000:31:00.1: cvl_0_1 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:29:29.730 00:29:29.730 --- 10.0.0.2 ping statistics --- 00:29:29.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.730 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:29:29.730 00:29:29.730 --- 10.0.0.1 ping statistics --- 00:29:29.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.730 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3620432 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3620432 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3620432 ']' 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.730 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.731 11:28:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:29.990 [2024-12-06 11:28:35.934934] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:29.990 [2024-12-06 11:28:35.935003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.990 [2024-12-06 11:28:36.043537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:29.990 [2024-12-06 11:28:36.095200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.990 [2024-12-06 11:28:36.095254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:29.990 [2024-12-06 11:28:36.095263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.990 [2024-12-06 11:28:36.095270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.990 [2024-12-06 11:28:36.095277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.990 [2024-12-06 11:28:36.097354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.990 [2024-12-06 11:28:36.097491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.990 [2024-12-06 11:28:36.097491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.929 [2024-12-06 11:28:36.798348] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.929 11:28:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.929 Malloc0 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.929 [2024-12-06 11:28:36.863968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.929 { 00:29:30.929 "params": { 00:29:30.929 "name": "Nvme$subsystem", 00:29:30.929 "trtype": "$TEST_TRANSPORT", 00:29:30.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.929 "adrfam": "ipv4", 00:29:30.929 "trsvcid": "$NVMF_PORT", 00:29:30.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.929 "hdgst": ${hdgst:-false}, 00:29:30.929 "ddgst": ${ddgst:-false} 00:29:30.929 }, 00:29:30.929 "method": "bdev_nvme_attach_controller" 00:29:30.929 } 00:29:30.929 EOF 00:29:30.929 )") 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:30.929 11:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:30.929 "params": { 00:29:30.929 "name": "Nvme1", 00:29:30.929 "trtype": "tcp", 00:29:30.929 "traddr": "10.0.0.2", 00:29:30.929 "adrfam": "ipv4", 00:29:30.929 "trsvcid": "4420", 00:29:30.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.929 "hdgst": false, 00:29:30.929 "ddgst": false 00:29:30.929 }, 00:29:30.929 "method": "bdev_nvme_attach_controller" 00:29:30.929 }' 00:29:30.929 [2024-12-06 11:28:36.920545] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:30.929 [2024-12-06 11:28:36.920601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620488 ] 00:29:30.929 [2024-12-06 11:28:36.998515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.929 [2024-12-06 11:28:37.035192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.189 Running I/O for 1 seconds... 
00:29:32.570 8823.00 IOPS, 34.46 MiB/s 00:29:32.570 Latency(us) 00:29:32.570 [2024-12-06T10:28:38.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:32.570 Verification LBA range: start 0x0 length 0x4000 00:29:32.570 Nvme1n1 : 1.01 8905.09 34.79 0.00 0.00 14286.13 2703.36 13598.72 00:29:32.570 [2024-12-06T10:28:38.737Z] =================================================================================================================== 00:29:32.570 [2024-12-06T10:28:38.737Z] Total : 8905.09 34.79 0.00 0.00 14286.13 2703.36 13598.72 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3620810 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.570 { 00:29:32.570 "params": { 00:29:32.570 "name": "Nvme$subsystem", 00:29:32.570 "trtype": "$TEST_TRANSPORT", 00:29:32.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.570 "adrfam": "ipv4", 00:29:32.570 "trsvcid": "$NVMF_PORT", 00:29:32.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.570 "hdgst": ${hdgst:-false}, 00:29:32.570 "ddgst": 
${ddgst:-false} 00:29:32.570 }, 00:29:32.570 "method": "bdev_nvme_attach_controller" 00:29:32.570 } 00:29:32.570 EOF 00:29:32.570 )") 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:32.570 11:28:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:32.570 "params": { 00:29:32.570 "name": "Nvme1", 00:29:32.570 "trtype": "tcp", 00:29:32.570 "traddr": "10.0.0.2", 00:29:32.570 "adrfam": "ipv4", 00:29:32.570 "trsvcid": "4420", 00:29:32.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.570 "hdgst": false, 00:29:32.570 "ddgst": false 00:29:32.570 }, 00:29:32.570 "method": "bdev_nvme_attach_controller" 00:29:32.570 }' 00:29:32.570 [2024-12-06 11:28:38.528740] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:32.570 [2024-12-06 11:28:38.528796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620810 ] 00:29:32.570 [2024-12-06 11:28:38.607612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.570 [2024-12-06 11:28:38.643498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.831 Running I/O for 15 seconds... 
00:29:34.717 9477.00 IOPS, 37.02 MiB/s [2024-12-06T10:28:41.837Z] 10339.50 IOPS, 40.39 MiB/s [2024-12-06T10:28:41.837Z] 11:28:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3620432 00:29:35.670 11:28:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:35.670 [2024-12-06 11:28:41.492647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.670 [2024-12-06 11:28:41.492691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-12-06 11:28:41.492711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.670 [2024-12-06 11:28:41.492723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-12-06 11:28:41.492733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.670 [2024-12-06 11:28:41.492741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-12-06 11:28:41.492753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.670 [2024-12-06 11:28:41.492763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-12-06 11:28:41.492774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.670 [2024-12-06 11:28:41.492782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-12-06 11:28:41.492793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.670 [2024-12-06 11:28:41.492801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.670 [2024-12-06 11:28:41.492811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:35.671 [2024-12-06 11:28:41.492898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.492978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.671 [2024-12-06 11:28:41.492988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 
[2024-12-06 11:28:41.493216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.671 [2024-12-06 11:28:41.493517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.671 [2024-12-06 11:28:41.493526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92688 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 
11:28:41.493694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493784] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.493983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.493991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 
11:28:41.494176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.672 [2024-12-06 11:28:41.494192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.672 [2024-12-06 11:28:41.494201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.673 [2024-12-06 11:28:41.494353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.673 [2024-12-06 11:28:41.494361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:35.673 [2024-12-06 11:28:41.494370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.673 [2024-12-06 11:28:41.494630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.673 [2024-12-06 11:28:41.494871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.673 [2024-12-06 11:28:41.494881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.674 [2024-12-06 11:28:41.494888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.674 [2024-12-06 11:28:41.494897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc94660 is same with the state(6) to be set
00:29:35.674 [2024-12-06 11:28:41.494906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:35.674 [2024-12-06 11:28:41.494912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:35.674 [2024-12-06 11:28:41.494919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92400 len:8 PRP1 0x0 PRP2 0x0
00:29:35.674 [2024-12-06 11:28:41.494927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:35.674 [2024-12-06 11:28:41.498495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:35.674 [2024-12-06 11:28:41.498549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.674 [2024-12-06 11:28:41.499310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.674 [2024-12-06 11:28:41.499348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:35.674 [2024-12-06 11:28:41.499359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:35.674 [2024-12-06 11:28:41.499605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.674 [2024-12-06 11:28:41.499834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:35.674 [2024-12-06 11:28:41.499844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:35.674 [2024-12-06 11:28:41.499853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:35.674 [2024-12-06 11:28:41.499872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:35.674 [2024-12-06 11:28:41.512801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.513439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.513478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.513489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.513732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.513967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.513978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.513986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.513994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.526733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.527210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.527230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.527239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.527462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.527685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.527694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.527701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.527708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.540657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.541243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.541281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.541292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.541535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.541774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.541784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.541793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.541802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.554531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.554982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.555002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.555010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.555233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.555455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.555463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.555470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.555477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.568408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.568964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.568982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.568994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.569216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.569437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.569446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.569453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.569460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.582426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.583145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.583183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.583194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.583436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.583664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.583673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.583681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.583688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.596407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.596943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.596981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.596993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.597239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.597465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.597475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.597483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.597491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.610429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.674 [2024-12-06 11:28:41.611126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.674 [2024-12-06 11:28:41.611164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.674 [2024-12-06 11:28:41.611174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.674 [2024-12-06 11:28:41.611417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.674 [2024-12-06 11:28:41.611648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.674 [2024-12-06 11:28:41.611657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.674 [2024-12-06 11:28:41.611665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.674 [2024-12-06 11:28:41.611673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.674 [2024-12-06 11:28:41.624390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.624966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.624986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.624994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.625217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.625439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.675 [2024-12-06 11:28:41.625448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.675 [2024-12-06 11:28:41.625455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.675 [2024-12-06 11:28:41.625462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.675 [2024-12-06 11:28:41.638394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.638949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.638967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.638974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.639196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.639418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.675 [2024-12-06 11:28:41.639426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.675 [2024-12-06 11:28:41.639433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.675 [2024-12-06 11:28:41.639439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.675 [2024-12-06 11:28:41.652364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.653078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.653117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.653127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.653369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.653596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.675 [2024-12-06 11:28:41.653605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.675 [2024-12-06 11:28:41.653613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.675 [2024-12-06 11:28:41.653625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.675 [2024-12-06 11:28:41.666342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.666990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.667027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.667040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.667283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.667509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.675 [2024-12-06 11:28:41.667518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.675 [2024-12-06 11:28:41.667526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.675 [2024-12-06 11:28:41.667534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.675 [2024-12-06 11:28:41.680251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.680943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.680981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.680992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.681234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.681460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.675 [2024-12-06 11:28:41.681469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.675 [2024-12-06 11:28:41.681478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.675 [2024-12-06 11:28:41.681486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.675 [2024-12-06 11:28:41.694298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.694944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.694982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.694994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.695240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.695466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.675 [2024-12-06 11:28:41.695475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.675 [2024-12-06 11:28:41.695483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.675 [2024-12-06 11:28:41.695491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.675 [2024-12-06 11:28:41.708216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.708909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.708947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.708958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.709200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.709426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.675 [2024-12-06 11:28:41.709435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.675 [2024-12-06 11:28:41.709443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.675 [2024-12-06 11:28:41.709451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.675 [2024-12-06 11:28:41.722170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.675 [2024-12-06 11:28:41.722851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.675 [2024-12-06 11:28:41.722896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.675 [2024-12-06 11:28:41.722908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.675 [2024-12-06 11:28:41.723151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.675 [2024-12-06 11:28:41.723378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.676 [2024-12-06 11:28:41.723387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.676 [2024-12-06 11:28:41.723394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.676 [2024-12-06 11:28:41.723402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.676 [2024-12-06 11:28:41.736124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:35.676 [2024-12-06 11:28:41.736774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.676 [2024-12-06 11:28:41.736811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:35.676 [2024-12-06 11:28:41.736822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:35.676 [2024-12-06 11:28:41.737073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:35.676 [2024-12-06 11:28:41.737300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:35.676 [2024-12-06 11:28:41.737309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:35.676 [2024-12-06 11:28:41.737317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:35.676 [2024-12-06 11:28:41.737325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:35.676 [2024-12-06 11:28:41.750044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:35.676 [2024-12-06 11:28:41.750698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.676 [2024-12-06 11:28:41.750736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:35.676 [2024-12-06 11:28:41.750746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:35.676 [2024-12-06 11:28:41.751002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.676 [2024-12-06 11:28:41.751229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:35.676 [2024-12-06 11:28:41.751239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:35.676 [2024-12-06 11:28:41.751246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:35.676 [2024-12-06 11:28:41.751255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:35.676 [2024-12-06 11:28:41.764020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:35.676 [2024-12-06 11:28:41.764701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.676 [2024-12-06 11:28:41.764740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:35.676 [2024-12-06 11:28:41.764751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:35.676 [2024-12-06 11:28:41.765003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.676 [2024-12-06 11:28:41.765231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:35.676 [2024-12-06 11:28:41.765240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:35.676 [2024-12-06 11:28:41.765248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:35.676 [2024-12-06 11:28:41.765255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:35.676 [2024-12-06 11:28:41.777968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:35.676 [2024-12-06 11:28:41.778557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.676 [2024-12-06 11:28:41.778576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:35.676 [2024-12-06 11:28:41.778584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:35.676 [2024-12-06 11:28:41.778806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.676 [2024-12-06 11:28:41.779062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:35.676 [2024-12-06 11:28:41.779073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:35.676 [2024-12-06 11:28:41.779080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:35.676 [2024-12-06 11:28:41.779087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:35.676 [2024-12-06 11:28:41.792000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:35.676 [2024-12-06 11:28:41.792561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.676 [2024-12-06 11:28:41.792578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:35.676 [2024-12-06 11:28:41.792586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:35.676 [2024-12-06 11:28:41.792808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.676 [2024-12-06 11:28:41.793035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:35.676 [2024-12-06 11:28:41.793048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:35.676 [2024-12-06 11:28:41.793056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:35.676 [2024-12-06 11:28:41.793062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:35.676 [2024-12-06 11:28:41.805978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:35.676 [2024-12-06 11:28:41.806517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.676 [2024-12-06 11:28:41.806533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:35.676 [2024-12-06 11:28:41.806541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:35.676 [2024-12-06 11:28:41.806763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.676 [2024-12-06 11:28:41.806990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:35.676 [2024-12-06 11:28:41.806998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:35.676 [2024-12-06 11:28:41.807006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:35.676 [2024-12-06 11:28:41.807012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:35.676 [2024-12-06 11:28:41.819922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:35.676 [2024-12-06 11:28:41.820456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.676 [2024-12-06 11:28:41.820472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:35.676 [2024-12-06 11:28:41.820479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:35.676 [2024-12-06 11:28:41.820701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:35.676 [2024-12-06 11:28:41.820928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:35.676 [2024-12-06 11:28:41.820937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:35.676 [2024-12-06 11:28:41.820944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:35.676 [2024-12-06 11:28:41.820951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.049 [2024-12-06 11:28:41.833875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.049 [2024-12-06 11:28:41.834429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.049 [2024-12-06 11:28:41.834467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.049 [2024-12-06 11:28:41.834477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.049 [2024-12-06 11:28:41.834720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.049 [2024-12-06 11:28:41.834954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.049 [2024-12-06 11:28:41.834964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.049 [2024-12-06 11:28:41.834972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.049 [2024-12-06 11:28:41.834984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.049 9301.33 IOPS, 36.33 MiB/s [2024-12-06T10:28:42.216Z] [2024-12-06 11:28:41.847885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.049 [2024-12-06 11:28:41.848527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.049 [2024-12-06 11:28:41.848566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.049 [2024-12-06 11:28:41.848576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.049 [2024-12-06 11:28:41.848818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.049 [2024-12-06 11:28:41.849054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.049 [2024-12-06 11:28:41.849064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.049 [2024-12-06 11:28:41.849072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.049 [2024-12-06 11:28:41.849080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.049 [2024-12-06 11:28:41.861788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.049 [2024-12-06 11:28:41.862454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.049 [2024-12-06 11:28:41.862492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.049 [2024-12-06 11:28:41.862503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.049 [2024-12-06 11:28:41.862745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.049 [2024-12-06 11:28:41.862981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.049 [2024-12-06 11:28:41.862991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.049 [2024-12-06 11:28:41.862999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.049 [2024-12-06 11:28:41.863006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.049 [2024-12-06 11:28:41.875714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.049 [2024-12-06 11:28:41.876175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.049 [2024-12-06 11:28:41.876214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.049 [2024-12-06 11:28:41.876226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.049 [2024-12-06 11:28:41.876469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.049 [2024-12-06 11:28:41.876696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.049 [2024-12-06 11:28:41.876704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.049 [2024-12-06 11:28:41.876712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.049 [2024-12-06 11:28:41.876720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.049 [2024-12-06 11:28:41.889648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.049 [2024-12-06 11:28:41.890345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.049 [2024-12-06 11:28:41.890383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.049 [2024-12-06 11:28:41.890394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.049 [2024-12-06 11:28:41.890635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.049 [2024-12-06 11:28:41.890871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.049 [2024-12-06 11:28:41.890881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.049 [2024-12-06 11:28:41.890889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.049 [2024-12-06 11:28:41.890897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.049 [2024-12-06 11:28:41.903612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.049 [2024-12-06 11:28:41.904274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.049 [2024-12-06 11:28:41.904312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.049 [2024-12-06 11:28:41.904323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.049 [2024-12-06 11:28:41.904565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.049 [2024-12-06 11:28:41.904792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.049 [2024-12-06 11:28:41.904801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.049 [2024-12-06 11:28:41.904809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:41.904816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:41.917542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:41.918196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:41.918235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:41.918245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:41.918487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:41.918714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:41.918723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:41.918731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:41.918739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:41.931456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:41.932012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:41.932049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:41.932061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:41.932311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:41.932537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:41.932546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:41.932554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:41.932561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:41.945505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:41.946155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:41.946193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:41.946204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:41.946445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:41.946671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:41.946680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:41.946689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:41.946696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:41.959415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:41.959996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:41.960034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:41.960046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:41.960291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:41.960517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:41.960527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:41.960535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:41.960542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:41.973261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:41.973850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:41.973875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:41.973883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:41.974106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:41.974328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:41.974340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:41.974348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:41.974354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:41.987290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:41.987832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:41.987877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:41.987890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:41.988136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:41.988362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:41.988371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:41.988379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:41.988387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:42.001310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:42.001961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:42.001999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:42.002012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:42.002255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:42.002481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:42.002491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:42.002499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:42.002507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:42.015229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:42.015892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:42.015931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:42.015942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:42.016184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:42.016411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:42.016419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:42.016427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:42.016440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:42.029160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:42.029799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:42.029836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:42.029849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:42.030101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:42.030329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:42.030337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:42.030345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:42.030353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.050 [2024-12-06 11:28:42.043077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.050 [2024-12-06 11:28:42.043741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.050 [2024-12-06 11:28:42.043779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.050 [2024-12-06 11:28:42.043790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.050 [2024-12-06 11:28:42.044051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.050 [2024-12-06 11:28:42.044279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.050 [2024-12-06 11:28:42.044288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.050 [2024-12-06 11:28:42.044296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.050 [2024-12-06 11:28:42.044304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.051 [2024-12-06 11:28:42.057012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.051 [2024-12-06 11:28:42.057676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.051 [2024-12-06 11:28:42.057714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.051 [2024-12-06 11:28:42.057724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.051 [2024-12-06 11:28:42.057975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.051 [2024-12-06 11:28:42.058202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.051 [2024-12-06 11:28:42.058211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.051 [2024-12-06 11:28:42.058219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.051 [2024-12-06 11:28:42.058226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.051 [2024-12-06 11:28:42.070941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:36.051 [2024-12-06 11:28:42.071641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.051 [2024-12-06 11:28:42.071679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:36.051 [2024-12-06 11:28:42.071690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:36.051 [2024-12-06 11:28:42.071939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:36.051 [2024-12-06 11:28:42.072168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:36.051 [2024-12-06 11:28:42.072177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:36.051 [2024-12-06 11:28:42.072185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:36.051 [2024-12-06 11:28:42.072193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:36.051 [2024-12-06 11:28:42.084924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.085522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.085541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.085549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.085772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.086001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.086010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.086017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.086025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.051 [2024-12-06 11:28:42.098942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.099514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.099531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.099539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.099761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.099988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.099997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.100004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.100011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.051 [2024-12-06 11:28:42.112926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.113460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.113476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.113484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.113710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.113937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.113947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.113956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.113964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.051 [2024-12-06 11:28:42.126885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.127418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.127435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.127442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.127664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.127891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.127900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.127907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.127914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.051 [2024-12-06 11:28:42.140838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.141413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.141431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.141438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.141660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.141886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.141894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.141901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.141908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.051 [2024-12-06 11:28:42.154832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.155378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.155395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.155402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.155624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.155845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.155857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.155871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.155878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.051 [2024-12-06 11:28:42.168792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.169310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.169327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.169334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.169555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.169777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.169784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.169792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.169798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.051 [2024-12-06 11:28:42.182748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.051 [2024-12-06 11:28:42.183264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.051 [2024-12-06 11:28:42.183280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.051 [2024-12-06 11:28:42.183288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.051 [2024-12-06 11:28:42.183509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.051 [2024-12-06 11:28:42.183730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.051 [2024-12-06 11:28:42.183739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.051 [2024-12-06 11:28:42.183746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.051 [2024-12-06 11:28:42.183753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.196695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.197156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.197173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.315 [2024-12-06 11:28:42.197181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.315 [2024-12-06 11:28:42.197403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.315 [2024-12-06 11:28:42.197625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.315 [2024-12-06 11:28:42.197632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.315 [2024-12-06 11:28:42.197639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.315 [2024-12-06 11:28:42.197654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.210572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.211134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.211151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.315 [2024-12-06 11:28:42.211159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.315 [2024-12-06 11:28:42.211380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.315 [2024-12-06 11:28:42.211602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.315 [2024-12-06 11:28:42.211610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.315 [2024-12-06 11:28:42.211617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.315 [2024-12-06 11:28:42.211624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.224542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.225106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.225123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.315 [2024-12-06 11:28:42.225130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.315 [2024-12-06 11:28:42.225351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.315 [2024-12-06 11:28:42.225573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.315 [2024-12-06 11:28:42.225581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.315 [2024-12-06 11:28:42.225588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.315 [2024-12-06 11:28:42.225595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.238521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.239115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.239153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.315 [2024-12-06 11:28:42.239164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.315 [2024-12-06 11:28:42.239406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.315 [2024-12-06 11:28:42.239633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.315 [2024-12-06 11:28:42.239642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.315 [2024-12-06 11:28:42.239649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.315 [2024-12-06 11:28:42.239657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.252390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.253014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.253057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.315 [2024-12-06 11:28:42.253069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.315 [2024-12-06 11:28:42.253313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.315 [2024-12-06 11:28:42.253541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.315 [2024-12-06 11:28:42.253550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.315 [2024-12-06 11:28:42.253558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.315 [2024-12-06 11:28:42.253567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.266287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.266917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.266955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.315 [2024-12-06 11:28:42.266968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.315 [2024-12-06 11:28:42.267214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.315 [2024-12-06 11:28:42.267441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.315 [2024-12-06 11:28:42.267451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.315 [2024-12-06 11:28:42.267459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.315 [2024-12-06 11:28:42.267466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.280186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.280780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.280818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.315 [2024-12-06 11:28:42.280830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.315 [2024-12-06 11:28:42.281085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.315 [2024-12-06 11:28:42.281313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.315 [2024-12-06 11:28:42.281322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.315 [2024-12-06 11:28:42.281329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.315 [2024-12-06 11:28:42.281337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.315 [2024-12-06 11:28:42.294050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.315 [2024-12-06 11:28:42.294773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.315 [2024-12-06 11:28:42.294811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.294823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.295082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.295309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.295318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.295326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.295333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.308052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.308591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.308610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.308618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.308841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.309070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.309086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.309094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.309101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.322021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.322695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.322733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.322744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.322996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.323224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.323233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.323241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.323249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.335975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.336633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.336670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.336683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.336935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.337163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.337177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.337185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.337193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.349920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.350535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.350573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.350584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.350827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.351063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.351073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.351081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.351089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.364012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.364579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.364598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.364606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.364830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.365059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.365068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.365075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.365082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.378004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.378550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.378567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.378574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.378796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.379023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.379032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.379039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.379046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.391971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.392631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.392669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.392680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.392931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.393158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.393167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.393175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.393183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.316 [2024-12-06 11:28:42.405982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.316 [2024-12-06 11:28:42.406584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.316 [2024-12-06 11:28:42.406604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.316 [2024-12-06 11:28:42.406612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.316 [2024-12-06 11:28:42.406835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.316 [2024-12-06 11:28:42.407062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.316 [2024-12-06 11:28:42.407071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.316 [2024-12-06 11:28:42.407078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.316 [2024-12-06 11:28:42.407085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.317 [2024-12-06 11:28:42.420003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.317 [2024-12-06 11:28:42.420662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.317 [2024-12-06 11:28:42.420699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.317 [2024-12-06 11:28:42.420710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.317 [2024-12-06 11:28:42.420961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.317 [2024-12-06 11:28:42.421188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.317 [2024-12-06 11:28:42.421197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.317 [2024-12-06 11:28:42.421205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.317 [2024-12-06 11:28:42.421213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.317 [2024-12-06 11:28:42.433941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.317 [2024-12-06 11:28:42.434528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.317 [2024-12-06 11:28:42.434552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.317 [2024-12-06 11:28:42.434560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.317 [2024-12-06 11:28:42.434783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.317 [2024-12-06 11:28:42.435010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.317 [2024-12-06 11:28:42.435020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.317 [2024-12-06 11:28:42.435027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.317 [2024-12-06 11:28:42.435033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.317 [2024-12-06 11:28:42.447956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.317 [2024-12-06 11:28:42.448499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.317 [2024-12-06 11:28:42.448516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.317 [2024-12-06 11:28:42.448524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.317 [2024-12-06 11:28:42.448745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.317 [2024-12-06 11:28:42.448972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.317 [2024-12-06 11:28:42.448982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.317 [2024-12-06 11:28:42.448989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.317 [2024-12-06 11:28:42.448995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.317 [2024-12-06 11:28:42.461917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.317 [2024-12-06 11:28:42.462454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.317 [2024-12-06 11:28:42.462470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.317 [2024-12-06 11:28:42.462478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.317 [2024-12-06 11:28:42.462699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.317 [2024-12-06 11:28:42.462926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.317 [2024-12-06 11:28:42.462934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.317 [2024-12-06 11:28:42.462942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.317 [2024-12-06 11:28:42.462949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.317 [2024-12-06 11:28:42.475872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.317 [2024-12-06 11:28:42.476447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.317 [2024-12-06 11:28:42.476462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.317 [2024-12-06 11:28:42.476470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.317 [2024-12-06 11:28:42.476695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.317 [2024-12-06 11:28:42.476923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.317 [2024-12-06 11:28:42.476932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.317 [2024-12-06 11:28:42.476939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.317 [2024-12-06 11:28:42.476945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.581 [2024-12-06 11:28:42.489870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.581 [2024-12-06 11:28:42.490410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.581 [2024-12-06 11:28:42.490426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.581 [2024-12-06 11:28:42.490433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.581 [2024-12-06 11:28:42.490655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.581 [2024-12-06 11:28:42.490884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.581 [2024-12-06 11:28:42.490893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.581 [2024-12-06 11:28:42.490900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.581 [2024-12-06 11:28:42.490907] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.581 [2024-12-06 11:28:42.503821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.581 [2024-12-06 11:28:42.504405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.581 [2024-12-06 11:28:42.504421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.581 [2024-12-06 11:28:42.504428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.581 [2024-12-06 11:28:42.504650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.581 [2024-12-06 11:28:42.504877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.581 [2024-12-06 11:28:42.504887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.581 [2024-12-06 11:28:42.504894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.581 [2024-12-06 11:28:42.504901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.581 [2024-12-06 11:28:42.517828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.581 [2024-12-06 11:28:42.518377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.581 [2024-12-06 11:28:42.518393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.581 [2024-12-06 11:28:42.518401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.581 [2024-12-06 11:28:42.518621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.581 [2024-12-06 11:28:42.518843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.581 [2024-12-06 11:28:42.518852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.581 [2024-12-06 11:28:42.518869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.581 [2024-12-06 11:28:42.518876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.581 [2024-12-06 11:28:42.531702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.581 [2024-12-06 11:28:42.532268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.581 [2024-12-06 11:28:42.532286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.581 [2024-12-06 11:28:42.532293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.581 [2024-12-06 11:28:42.532515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.581 [2024-12-06 11:28:42.532737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.581 [2024-12-06 11:28:42.532744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.581 [2024-12-06 11:28:42.532752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.581 [2024-12-06 11:28:42.532758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.581 [2024-12-06 11:28:42.545708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.581 [2024-12-06 11:28:42.546259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.581 [2024-12-06 11:28:42.546276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.581 [2024-12-06 11:28:42.546284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.581 [2024-12-06 11:28:42.546506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.581 [2024-12-06 11:28:42.546727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.581 [2024-12-06 11:28:42.546736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.546743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.546750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.559681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.560236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.560252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.560259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.560481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.560702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.560710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.560718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.560724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.573665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.574217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.574233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.574241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.574462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.574684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.574692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.574698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.574705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.587640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.588185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.588201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.588208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.588430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.588652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.588660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.588667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.588674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.601616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.602168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.602184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.602192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.602413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.602634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.602643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.602650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.602656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.615617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.616160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.616177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.616188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.616411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.616632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.616640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.616648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.616654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.629584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.630155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.630173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.630180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.630402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.630624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.630631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.630639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.630645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.643587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.644164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.644180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.644188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.644409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.644631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.644638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.644645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.644652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.657589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.658157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.658174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.658182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.658404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.658629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.658644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.658651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.658658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.671596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.672072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.672089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.672096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.672318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.582 [2024-12-06 11:28:42.672538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.582 [2024-12-06 11:28:42.672547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.582 [2024-12-06 11:28:42.672554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.582 [2024-12-06 11:28:42.672561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.582 [2024-12-06 11:28:42.685493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.582 [2024-12-06 11:28:42.686038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.582 [2024-12-06 11:28:42.686055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.582 [2024-12-06 11:28:42.686062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.582 [2024-12-06 11:28:42.686284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.583 [2024-12-06 11:28:42.686506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.583 [2024-12-06 11:28:42.686514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.583 [2024-12-06 11:28:42.686522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.583 [2024-12-06 11:28:42.686528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.583 [2024-12-06 11:28:42.699463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.583 [2024-12-06 11:28:42.699869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.583 [2024-12-06 11:28:42.699888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.583 [2024-12-06 11:28:42.699896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.583 [2024-12-06 11:28:42.700119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.583 [2024-12-06 11:28:42.700340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.583 [2024-12-06 11:28:42.700349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.583 [2024-12-06 11:28:42.700359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.583 [2024-12-06 11:28:42.700366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.583 [2024-12-06 11:28:42.713300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.583 [2024-12-06 11:28:42.713847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.583 [2024-12-06 11:28:42.713972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.583 [2024-12-06 11:28:42.713982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.583 [2024-12-06 11:28:42.714204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.583 [2024-12-06 11:28:42.714426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.583 [2024-12-06 11:28:42.714434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.583 [2024-12-06 11:28:42.714441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.583 [2024-12-06 11:28:42.714448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.583 [2024-12-06 11:28:42.727170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.583 [2024-12-06 11:28:42.727699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.583 [2024-12-06 11:28:42.727715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.583 [2024-12-06 11:28:42.727722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.583 [2024-12-06 11:28:42.727950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.583 [2024-12-06 11:28:42.728172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.583 [2024-12-06 11:28:42.728181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.583 [2024-12-06 11:28:42.728188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.583 [2024-12-06 11:28:42.728194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.583 [2024-12-06 11:28:42.741134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.583 [2024-12-06 11:28:42.741702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.583 [2024-12-06 11:28:42.741718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.583 [2024-12-06 11:28:42.741726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.583 [2024-12-06 11:28:42.741952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.583 [2024-12-06 11:28:42.742174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.583 [2024-12-06 11:28:42.742182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.583 [2024-12-06 11:28:42.742189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.583 [2024-12-06 11:28:42.742195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.845 [2024-12-06 11:28:42.755139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.845 [2024-12-06 11:28:42.755679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.845 [2024-12-06 11:28:42.755696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.845 [2024-12-06 11:28:42.755703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.845 [2024-12-06 11:28:42.755932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.845 [2024-12-06 11:28:42.756154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.845 [2024-12-06 11:28:42.756162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.845 [2024-12-06 11:28:42.756169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.845 [2024-12-06 11:28:42.756176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.845 [2024-12-06 11:28:42.769109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.845 [2024-12-06 11:28:42.769641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.845 [2024-12-06 11:28:42.769679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.845 [2024-12-06 11:28:42.769692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.845 [2024-12-06 11:28:42.769948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.845 [2024-12-06 11:28:42.770177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.845 [2024-12-06 11:28:42.770186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.845 [2024-12-06 11:28:42.770194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.845 [2024-12-06 11:28:42.770203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.845 [2024-12-06 11:28:42.782984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.845 [2024-12-06 11:28:42.783528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.845 [2024-12-06 11:28:42.783547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.845 [2024-12-06 11:28:42.783555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.845 [2024-12-06 11:28:42.783778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.845 [2024-12-06 11:28:42.784006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.845 [2024-12-06 11:28:42.784015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.845 [2024-12-06 11:28:42.784022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.845 [2024-12-06 11:28:42.784029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.845 [2024-12-06 11:28:42.796962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.845 [2024-12-06 11:28:42.797484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.845 [2024-12-06 11:28:42.797501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.845 [2024-12-06 11:28:42.797514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.845 [2024-12-06 11:28:42.797736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.797964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.797974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.797981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.797987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.810922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.811455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.811471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.811479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.811700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.811930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.811939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.811946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.811952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.824914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.825460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.825477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.825485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.825706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.825933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.825950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.825957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.825964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.838904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.839509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.839547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.839558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.839800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.840041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.840051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.840059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.840067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 6976.00 IOPS, 27.25 MiB/s [2024-12-06T10:28:43.013Z] [2024-12-06 11:28:42.852757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.853383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.853421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.853432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.853674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.853910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.853920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.853928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.853936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.866641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.867228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.867265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.867278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.867523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.867749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.867759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.867766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.867774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.880492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.881143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.881181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.881192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.881434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.881660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.881669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.881685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.881693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.894406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.895081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.895119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.895130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.895372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.895598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.895607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.895615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.895623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.908335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.909036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.909075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.846 [2024-12-06 11:28:42.909086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.846 [2024-12-06 11:28:42.909328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.846 [2024-12-06 11:28:42.909554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.846 [2024-12-06 11:28:42.909563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.846 [2024-12-06 11:28:42.909571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.846 [2024-12-06 11:28:42.909578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.846 [2024-12-06 11:28:42.922287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.846 [2024-12-06 11:28:42.922831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.846 [2024-12-06 11:28:42.922851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.847 [2024-12-06 11:28:42.922859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.847 [2024-12-06 11:28:42.923088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.847 [2024-12-06 11:28:42.923310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.847 [2024-12-06 11:28:42.923318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.847 [2024-12-06 11:28:42.923325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.847 [2024-12-06 11:28:42.923332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.847 [2024-12-06 11:28:42.936258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.847 [2024-12-06 11:28:42.936928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.847 [2024-12-06 11:28:42.936967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.847 [2024-12-06 11:28:42.936979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.847 [2024-12-06 11:28:42.937225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.847 [2024-12-06 11:28:42.937451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.847 [2024-12-06 11:28:42.937460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.847 [2024-12-06 11:28:42.937468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.847 [2024-12-06 11:28:42.937476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.847 [2024-12-06 11:28:42.950208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.847 [2024-12-06 11:28:42.950800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.847 [2024-12-06 11:28:42.950820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.847 [2024-12-06 11:28:42.950827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.847 [2024-12-06 11:28:42.951057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.847 [2024-12-06 11:28:42.951279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.847 [2024-12-06 11:28:42.951288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.847 [2024-12-06 11:28:42.951295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.847 [2024-12-06 11:28:42.951302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.847 [2024-12-06 11:28:42.964205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.847 [2024-12-06 11:28:42.964754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.847 [2024-12-06 11:28:42.964770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.847 [2024-12-06 11:28:42.964778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.847 [2024-12-06 11:28:42.965005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.847 [2024-12-06 11:28:42.965228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.847 [2024-12-06 11:28:42.965236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.847 [2024-12-06 11:28:42.965243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.847 [2024-12-06 11:28:42.965249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.847 [2024-12-06 11:28:42.978150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.847 [2024-12-06 11:28:42.978681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.847 [2024-12-06 11:28:42.978697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.847 [2024-12-06 11:28:42.978709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.847 [2024-12-06 11:28:42.978937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.847 [2024-12-06 11:28:42.979160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.847 [2024-12-06 11:28:42.979169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.847 [2024-12-06 11:28:42.979176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.847 [2024-12-06 11:28:42.979182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.847 [2024-12-06 11:28:42.992087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.847 [2024-12-06 11:28:42.992652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.847 [2024-12-06 11:28:42.992669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.847 [2024-12-06 11:28:42.992676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.847 [2024-12-06 11:28:42.992903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.847 [2024-12-06 11:28:42.993125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.847 [2024-12-06 11:28:42.993133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.847 [2024-12-06 11:28:42.993140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.847 [2024-12-06 11:28:42.993147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:36.847 [2024-12-06 11:28:43.006058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:36.847 [2024-12-06 11:28:43.006581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.847 [2024-12-06 11:28:43.006597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:36.847 [2024-12-06 11:28:43.006604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:36.847 [2024-12-06 11:28:43.006826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:36.847 [2024-12-06 11:28:43.007055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:36.847 [2024-12-06 11:28:43.007064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:36.847 [2024-12-06 11:28:43.007072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:36.847 [2024-12-06 11:28:43.007079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.109 [2024-12-06 11:28:43.020007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.109 [2024-12-06 11:28:43.020543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.109 [2024-12-06 11:28:43.020559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.109 [2024-12-06 11:28:43.020567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.109 [2024-12-06 11:28:43.020788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.109 [2024-12-06 11:28:43.021019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.109 [2024-12-06 11:28:43.021028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.109 [2024-12-06 11:28:43.021035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.109 [2024-12-06 11:28:43.021042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.109 [2024-12-06 11:28:43.033993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.109 [2024-12-06 11:28:43.034443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.109 [2024-12-06 11:28:43.034459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.109 [2024-12-06 11:28:43.034467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.109 [2024-12-06 11:28:43.034689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.110 [2024-12-06 11:28:43.034917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.110 [2024-12-06 11:28:43.034927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.110 [2024-12-06 11:28:43.034934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.110 [2024-12-06 11:28:43.034940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.110 [2024-12-06 11:28:43.047855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.110 [2024-12-06 11:28:43.048389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.110 [2024-12-06 11:28:43.048405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.110 [2024-12-06 11:28:43.048413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.110 [2024-12-06 11:28:43.048634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.110 [2024-12-06 11:28:43.048856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.110 [2024-12-06 11:28:43.048870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.110 [2024-12-06 11:28:43.048878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.110 [2024-12-06 11:28:43.048884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.110 [2024-12-06 11:28:43.061786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.062277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.062294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.062301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.062523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.062744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.062752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.062763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.062770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.075678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.076213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.076229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.076236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.076457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.076679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.076687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.076695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.076701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.089605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.090133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.090150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.090158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.090379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.090600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.090607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.090614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.090621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.103533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.103941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.103959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.103967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.104189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.104411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.104418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.104425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.104432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.117551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.118216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.118255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.118266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.118508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.118735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.118744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.118752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.118759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.131482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.132182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.132220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.132231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.132473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.132699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.132708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.132716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.132724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.145453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.146047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.146084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.146095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.146337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.146565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.146573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.146582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.146589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.159315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.110 [2024-12-06 11:28:43.159976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.110 [2024-12-06 11:28:43.160014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.110 [2024-12-06 11:28:43.160030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.110 [2024-12-06 11:28:43.160271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.110 [2024-12-06 11:28:43.160498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.110 [2024-12-06 11:28:43.160506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.110 [2024-12-06 11:28:43.160514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.110 [2024-12-06 11:28:43.160522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.110 [2024-12-06 11:28:43.173234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.173919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.173957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.173969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.174213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.174439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.174448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.174456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.174464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.111 [2024-12-06 11:28:43.187175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.187830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.187875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.187887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.188129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.188356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.188365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.188372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.188380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.111 [2024-12-06 11:28:43.201094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.201691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.201709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.201717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.201947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.202174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.202182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.202189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.202196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.111 [2024-12-06 11:28:43.215101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.215678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.215695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.215702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.215929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.216152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.216159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.216166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.216173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.111 [2024-12-06 11:28:43.229076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.229709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.229747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.229758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.230010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.230237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.230246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.230254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.230261] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.111 [2024-12-06 11:28:43.243013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.243616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.243636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.243644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.243874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.244097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.244105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.244113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.244124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.111 [2024-12-06 11:28:43.257049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.257663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.257701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.257712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.257968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.258196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.258206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.258215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.258223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.111 [2024-12-06 11:28:43.270954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.111 [2024-12-06 11:28:43.271619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.111 [2024-12-06 11:28:43.271656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.111 [2024-12-06 11:28:43.271668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.111 [2024-12-06 11:28:43.271920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.111 [2024-12-06 11:28:43.272148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.111 [2024-12-06 11:28:43.272157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.111 [2024-12-06 11:28:43.272165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.111 [2024-12-06 11:28:43.272173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.284895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.285525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.285563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.285574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.285816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.286051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.286061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.286069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.286077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.298802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.299505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.299544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.299554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.299797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.300034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.300044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.300052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.300060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.312780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.313467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.313505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.313516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.313757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.313993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.314003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.314010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.314018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.326718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.327233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.327253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.327261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.327484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.327706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.327714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.327721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.327727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.340648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.341171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.341188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.341200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.341421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.341642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.341650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.341658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.341664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.354613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.355101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.355118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.355126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.355348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.355569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.355576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.355584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.355590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.368483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.369061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.369079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.369087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.369309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.369531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.369539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.369546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.369552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.382457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.374 [2024-12-06 11:28:43.382990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.374 [2024-12-06 11:28:43.383007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.374 [2024-12-06 11:28:43.383015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.374 [2024-12-06 11:28:43.383237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.374 [2024-12-06 11:28:43.383458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.374 [2024-12-06 11:28:43.383470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.374 [2024-12-06 11:28:43.383477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.374 [2024-12-06 11:28:43.383484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.374 [2024-12-06 11:28:43.396389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.375 [2024-12-06 11:28:43.397089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.375 [2024-12-06 11:28:43.397126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.375 [2024-12-06 11:28:43.397137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.375 [2024-12-06 11:28:43.397379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.375 [2024-12-06 11:28:43.397606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.375 [2024-12-06 11:28:43.397615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.375 [2024-12-06 11:28:43.397623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.375 [2024-12-06 11:28:43.397631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.375 [2024-12-06 11:28:43.410340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.375 [2024-12-06 11:28:43.411045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.375 [2024-12-06 11:28:43.411083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.375 [2024-12-06 11:28:43.411094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.375 [2024-12-06 11:28:43.411336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.375 [2024-12-06 11:28:43.411562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.375 [2024-12-06 11:28:43.411571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.375 [2024-12-06 11:28:43.411579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.375 [2024-12-06 11:28:43.411587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.375 [2024-12-06 11:28:43.424299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.375 [2024-12-06 11:28:43.424978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.375 [2024-12-06 11:28:43.425016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.375 [2024-12-06 11:28:43.425028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.375 [2024-12-06 11:28:43.425274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.375 [2024-12-06 11:28:43.425501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.375 [2024-12-06 11:28:43.425510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.375 [2024-12-06 11:28:43.425517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.375 [2024-12-06 11:28:43.425529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.375 [2024-12-06 11:28:43.438254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.375 [2024-12-06 11:28:43.438695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.375 [2024-12-06 11:28:43.438715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.375 [2024-12-06 11:28:43.438724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.375 [2024-12-06 11:28:43.438956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.375 [2024-12-06 11:28:43.439179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.375 [2024-12-06 11:28:43.439188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.375 [2024-12-06 11:28:43.439195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.375 [2024-12-06 11:28:43.439202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.375 [2024-12-06 11:28:43.452159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.375 [2024-12-06 11:28:43.452728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.375 [2024-12-06 11:28:43.452766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.375 [2024-12-06 11:28:43.452779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.375 [2024-12-06 11:28:43.453034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.375 [2024-12-06 11:28:43.453262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.375 [2024-12-06 11:28:43.453271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.375 [2024-12-06 11:28:43.453278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.375 [2024-12-06 11:28:43.453286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.375 [2024-12-06 11:28:43.466204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.375 [2024-12-06 11:28:43.466912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.375 [2024-12-06 11:28:43.466950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.375 [2024-12-06 11:28:43.466962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.375 [2024-12-06 11:28:43.467206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.375 [2024-12-06 11:28:43.467432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.375 [2024-12-06 11:28:43.467441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.375 [2024-12-06 11:28:43.467450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.375 [2024-12-06 11:28:43.467458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.375 [2024-12-06 11:28:43.480172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.375 [2024-12-06 11:28:43.480820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.375 [2024-12-06 11:28:43.480858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.375 [2024-12-06 11:28:43.480876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.375 [2024-12-06 11:28:43.481119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.375 [2024-12-06 11:28:43.481345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.375 [2024-12-06 11:28:43.481354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.375 [2024-12-06 11:28:43.481362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.375 [2024-12-06 11:28:43.481370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.375 [2024-12-06 11:28:43.494079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.375 [2024-12-06 11:28:43.494759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.375 [2024-12-06 11:28:43.494798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.375 [2024-12-06 11:28:43.494808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.375 [2024-12-06 11:28:43.495058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.375 [2024-12-06 11:28:43.495285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.375 [2024-12-06 11:28:43.495294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.375 [2024-12-06 11:28:43.495302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.375 [2024-12-06 11:28:43.495309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.375 [2024-12-06 11:28:43.508022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.375 [2024-12-06 11:28:43.508565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.375 [2024-12-06 11:28:43.508584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.375 [2024-12-06 11:28:43.508592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.375 [2024-12-06 11:28:43.508815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.375 [2024-12-06 11:28:43.509045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.375 [2024-12-06 11:28:43.509055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.375 [2024-12-06 11:28:43.509063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.375 [2024-12-06 11:28:43.509071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.375 [2024-12-06 11:28:43.521983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.375 [2024-12-06 11:28:43.522567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.375 [2024-12-06 11:28:43.522605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.375 [2024-12-06 11:28:43.522616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.375 [2024-12-06 11:28:43.522870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.375 [2024-12-06 11:28:43.523100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.375 [2024-12-06 11:28:43.523109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.375 [2024-12-06 11:28:43.523118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.375 [2024-12-06 11:28:43.523126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.376 [2024-12-06 11:28:43.535870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.376 [2024-12-06 11:28:43.536513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.376 [2024-12-06 11:28:43.536550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.376 [2024-12-06 11:28:43.536561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.376 [2024-12-06 11:28:43.536803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.376 [2024-12-06 11:28:43.537040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.376 [2024-12-06 11:28:43.537050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.376 [2024-12-06 11:28:43.537058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.376 [2024-12-06 11:28:43.537066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.637 [2024-12-06 11:28:43.549806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.637 [2024-12-06 11:28:43.550436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.637 [2024-12-06 11:28:43.550457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.637 [2024-12-06 11:28:43.550466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.637 [2024-12-06 11:28:43.550689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.637 [2024-12-06 11:28:43.550919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.637 [2024-12-06 11:28:43.550928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.637 [2024-12-06 11:28:43.550935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.637 [2024-12-06 11:28:43.550942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.637 [2024-12-06 11:28:43.563669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.637 [2024-12-06 11:28:43.564302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.637 [2024-12-06 11:28:43.564340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.637 [2024-12-06 11:28:43.564350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.637 [2024-12-06 11:28:43.564593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.637 [2024-12-06 11:28:43.564819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.637 [2024-12-06 11:28:43.564837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.637 [2024-12-06 11:28:43.564845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.637 [2024-12-06 11:28:43.564852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.637 [2024-12-06 11:28:43.577580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.637 [2024-12-06 11:28:43.578069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.637 [2024-12-06 11:28:43.578090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.637 [2024-12-06 11:28:43.578098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.637 [2024-12-06 11:28:43.578322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.637 [2024-12-06 11:28:43.578544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.637 [2024-12-06 11:28:43.578553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.637 [2024-12-06 11:28:43.578560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.637 [2024-12-06 11:28:43.578567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.637 [2024-12-06 11:28:43.591480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.637 [2024-12-06 11:28:43.592024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.637 [2024-12-06 11:28:43.592041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.637 [2024-12-06 11:28:43.592048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.637 [2024-12-06 11:28:43.592270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.637 [2024-12-06 11:28:43.592492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.637 [2024-12-06 11:28:43.592500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.637 [2024-12-06 11:28:43.592507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.637 [2024-12-06 11:28:43.592514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.605423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.606026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.606042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.606050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.606272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.606493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.638 [2024-12-06 11:28:43.606501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.638 [2024-12-06 11:28:43.606508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.638 [2024-12-06 11:28:43.606518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.619430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.620097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.620135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.620147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.620389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.620616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.638 [2024-12-06 11:28:43.620625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.638 [2024-12-06 11:28:43.620633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.638 [2024-12-06 11:28:43.620641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.633363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.633953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.633991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.634003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.634246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.634473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.638 [2024-12-06 11:28:43.634482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.638 [2024-12-06 11:28:43.634490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.638 [2024-12-06 11:28:43.634498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.647218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.647803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.647822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.647830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.648059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.648282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.638 [2024-12-06 11:28:43.648290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.638 [2024-12-06 11:28:43.648297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.638 [2024-12-06 11:28:43.648303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.661259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.661806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.661847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.661858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.662109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.662336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.638 [2024-12-06 11:28:43.662345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.638 [2024-12-06 11:28:43.662352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.638 [2024-12-06 11:28:43.662360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.675333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.676004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.676041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.676054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.676297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.676523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.638 [2024-12-06 11:28:43.676533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.638 [2024-12-06 11:28:43.676541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.638 [2024-12-06 11:28:43.676549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.689278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.689980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.690018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.690030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.690276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.690502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.638 [2024-12-06 11:28:43.690510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.638 [2024-12-06 11:28:43.690518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.638 [2024-12-06 11:28:43.690526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.638 [2024-12-06 11:28:43.703241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.638 [2024-12-06 11:28:43.703916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.638 [2024-12-06 11:28:43.703954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.638 [2024-12-06 11:28:43.703966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.638 [2024-12-06 11:28:43.704215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.638 [2024-12-06 11:28:43.704441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.639 [2024-12-06 11:28:43.704450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.639 [2024-12-06 11:28:43.704458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.639 [2024-12-06 11:28:43.704465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.639 [2024-12-06 11:28:43.717185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:37.639 [2024-12-06 11:28:43.717890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.639 [2024-12-06 11:28:43.717929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:37.639 [2024-12-06 11:28:43.717940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:37.639 [2024-12-06 11:28:43.718181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:37.639 [2024-12-06 11:28:43.718408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:37.639 [2024-12-06 11:28:43.718417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:37.639 [2024-12-06 11:28:43.718425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:37.639 [2024-12-06 11:28:43.718432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:37.639 [2024-12-06 11:28:43.731149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.639 [2024-12-06 11:28:43.731813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.639 [2024-12-06 11:28:43.731851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.639 [2024-12-06 11:28:43.731870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.639 [2024-12-06 11:28:43.732112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.639 [2024-12-06 11:28:43.732339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.639 [2024-12-06 11:28:43.732348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.639 [2024-12-06 11:28:43.732356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.639 [2024-12-06 11:28:43.732364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.639 [2024-12-06 11:28:43.745135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.639 [2024-12-06 11:28:43.745775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.639 [2024-12-06 11:28:43.745813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.639 [2024-12-06 11:28:43.745825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.639 [2024-12-06 11:28:43.746076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.639 [2024-12-06 11:28:43.746304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.639 [2024-12-06 11:28:43.746317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.639 [2024-12-06 11:28:43.746325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.639 [2024-12-06 11:28:43.746333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.639 [2024-12-06 11:28:43.759055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.639 [2024-12-06 11:28:43.759732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.639 [2024-12-06 11:28:43.759771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.639 [2024-12-06 11:28:43.759781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.639 [2024-12-06 11:28:43.760032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.639 [2024-12-06 11:28:43.760260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.639 [2024-12-06 11:28:43.760269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.639 [2024-12-06 11:28:43.760278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.639 [2024-12-06 11:28:43.760287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.639 [2024-12-06 11:28:43.773006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.639 [2024-12-06 11:28:43.773597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.639 [2024-12-06 11:28:43.773616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.639 [2024-12-06 11:28:43.773624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.639 [2024-12-06 11:28:43.773847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.639 [2024-12-06 11:28:43.774075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.639 [2024-12-06 11:28:43.774085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.639 [2024-12-06 11:28:43.774093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.639 [2024-12-06 11:28:43.774101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.639 [2024-12-06 11:28:43.787017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.639 [2024-12-06 11:28:43.787586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.639 [2024-12-06 11:28:43.787602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.639 [2024-12-06 11:28:43.787610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.639 [2024-12-06 11:28:43.787831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.639 [2024-12-06 11:28:43.788057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.639 [2024-12-06 11:28:43.788066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.639 [2024-12-06 11:28:43.788073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.639 [2024-12-06 11:28:43.788080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.639 [2024-12-06 11:28:43.800998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.639 [2024-12-06 11:28:43.801569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.639 [2024-12-06 11:28:43.801585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.639 [2024-12-06 11:28:43.801592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.639 [2024-12-06 11:28:43.801813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.639 [2024-12-06 11:28:43.802041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.639 [2024-12-06 11:28:43.802050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.639 [2024-12-06 11:28:43.802057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.639 [2024-12-06 11:28:43.802064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.926 [2024-12-06 11:28:43.815008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.926 [2024-12-06 11:28:43.815647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.926 [2024-12-06 11:28:43.815685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.926 [2024-12-06 11:28:43.815697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.926 [2024-12-06 11:28:43.815947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.926 [2024-12-06 11:28:43.816175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.926 [2024-12-06 11:28:43.816184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.926 [2024-12-06 11:28:43.816192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.926 [2024-12-06 11:28:43.816200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.926 [2024-12-06 11:28:43.828912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.926 [2024-12-06 11:28:43.829591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.926 [2024-12-06 11:28:43.829630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.926 [2024-12-06 11:28:43.829640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.926 [2024-12-06 11:28:43.829890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.926 [2024-12-06 11:28:43.830117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.926 [2024-12-06 11:28:43.830126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.926 [2024-12-06 11:28:43.830134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.926 [2024-12-06 11:28:43.830142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.926 [2024-12-06 11:28:43.842870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.926 [2024-12-06 11:28:43.843347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.926 [2024-12-06 11:28:43.843371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.926 [2024-12-06 11:28:43.843379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.926 [2024-12-06 11:28:43.843603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.926 [2024-12-06 11:28:43.843825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.926 [2024-12-06 11:28:43.843833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.926 [2024-12-06 11:28:43.843840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.843847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 5580.80 IOPS, 21.80 MiB/s [2024-12-06T10:28:44.094Z] [2024-12-06 11:28:43.856728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.857277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.857295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.857303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.857525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.857746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.857754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.857761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.857768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.870709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.871263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.871281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.871289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.871510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.871732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.871741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.871748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.871755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.884674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.885273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.885312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.885323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.885570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.885796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.885805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.885812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.885820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.898540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.899104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.899124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.899132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.899355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.899577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.899585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.899593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.899599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.912519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.913038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.913055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.913062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.913284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.913506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.913514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.913521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.913528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.926461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.927152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.927189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.927200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.927442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.927668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.927682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.927690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.927698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.940437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.941145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.941183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.941194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.941436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.941662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.941671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.941679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.941687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.954424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.955005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.955043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.955055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.955300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.955526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.955536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.955544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.955552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.968272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.968958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.968997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.969009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.969252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.927 [2024-12-06 11:28:43.969479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.927 [2024-12-06 11:28:43.969488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.927 [2024-12-06 11:28:43.969496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.927 [2024-12-06 11:28:43.969503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.927 [2024-12-06 11:28:43.982235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.927 [2024-12-06 11:28:43.982789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.927 [2024-12-06 11:28:43.982808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.927 [2024-12-06 11:28:43.982817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.927 [2024-12-06 11:28:43.983045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.928 [2024-12-06 11:28:43.983268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.928 [2024-12-06 11:28:43.983276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.928 [2024-12-06 11:28:43.983283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.928 [2024-12-06 11:28:43.983290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.928 [2024-12-06 11:28:43.996209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.928 [2024-12-06 11:28:43.996875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.928 [2024-12-06 11:28:43.996913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.928 [2024-12-06 11:28:43.996924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.928 [2024-12-06 11:28:43.997166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.928 [2024-12-06 11:28:43.997392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.928 [2024-12-06 11:28:43.997401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.928 [2024-12-06 11:28:43.997409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.928 [2024-12-06 11:28:43.997416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.928 [2024-12-06 11:28:44.010132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.928 [2024-12-06 11:28:44.010706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.928 [2024-12-06 11:28:44.010725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.928 [2024-12-06 11:28:44.010733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.928 [2024-12-06 11:28:44.010961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.928 [2024-12-06 11:28:44.011185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.928 [2024-12-06 11:28:44.011194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.928 [2024-12-06 11:28:44.011202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.928 [2024-12-06 11:28:44.011210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.928 [2024-12-06 11:28:44.024130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.928 [2024-12-06 11:28:44.024794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.928 [2024-12-06 11:28:44.024837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.928 [2024-12-06 11:28:44.024850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.928 [2024-12-06 11:28:44.025100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.928 [2024-12-06 11:28:44.025328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.928 [2024-12-06 11:28:44.025337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.928 [2024-12-06 11:28:44.025346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.928 [2024-12-06 11:28:44.025355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.928 [2024-12-06 11:28:44.038082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.928 [2024-12-06 11:28:44.038656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.928 [2024-12-06 11:28:44.038694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.928 [2024-12-06 11:28:44.038704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.928 [2024-12-06 11:28:44.038955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.928 [2024-12-06 11:28:44.039183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.928 [2024-12-06 11:28:44.039192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.928 [2024-12-06 11:28:44.039199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.928 [2024-12-06 11:28:44.039207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.928 [2024-12-06 11:28:44.052141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:37.928 [2024-12-06 11:28:44.052696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.928 [2024-12-06 11:28:44.052715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:37.928 [2024-12-06 11:28:44.052723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:37.928 [2024-12-06 11:28:44.052951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:37.928 [2024-12-06 11:28:44.053175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:37.928 [2024-12-06 11:28:44.053183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:37.928 [2024-12-06 11:28:44.053190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:37.928 [2024-12-06 11:28:44.053196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:37.928 [2024-12-06 11:28:44.066112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.191 [2024-12-06 11:28:44.066649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.191 [2024-12-06 11:28:44.066667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.191 [2024-12-06 11:28:44.066676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.191 [2024-12-06 11:28:44.066934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.191 [2024-12-06 11:28:44.067160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.191 [2024-12-06 11:28:44.067168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.191 [2024-12-06 11:28:44.067176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.191 [2024-12-06 11:28:44.067182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.191 [2024-12-06 11:28:44.080100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.191 [2024-12-06 11:28:44.080751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.191 [2024-12-06 11:28:44.080789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.191 [2024-12-06 11:28:44.080800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.191 [2024-12-06 11:28:44.081050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.191 [2024-12-06 11:28:44.081277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.191 [2024-12-06 11:28:44.081286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.191 [2024-12-06 11:28:44.081294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.191 [2024-12-06 11:28:44.081302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.191 [2024-12-06 11:28:44.094018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.191 [2024-12-06 11:28:44.094671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.191 [2024-12-06 11:28:44.094709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.191 [2024-12-06 11:28:44.094719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.191 [2024-12-06 11:28:44.094969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.191 [2024-12-06 11:28:44.095197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.191 [2024-12-06 11:28:44.095206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.191 [2024-12-06 11:28:44.095214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.191 [2024-12-06 11:28:44.095222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.191 [2024-12-06 11:28:44.107944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.191 [2024-12-06 11:28:44.108604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.191 [2024-12-06 11:28:44.108643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.191 [2024-12-06 11:28:44.108654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.191 [2024-12-06 11:28:44.108903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.191 [2024-12-06 11:28:44.109131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.191 [2024-12-06 11:28:44.109140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.191 [2024-12-06 11:28:44.109153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.191 [2024-12-06 11:28:44.109161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.191 [2024-12-06 11:28:44.121880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.191 [2024-12-06 11:28:44.122437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.191 [2024-12-06 11:28:44.122455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.191 [2024-12-06 11:28:44.122463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.191 [2024-12-06 11:28:44.122686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.191 [2024-12-06 11:28:44.122915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.191 [2024-12-06 11:28:44.122925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.191 [2024-12-06 11:28:44.122932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.191 [2024-12-06 11:28:44.122939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.191 [2024-12-06 11:28:44.135871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.191 [2024-12-06 11:28:44.136520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.191 [2024-12-06 11:28:44.136558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.191 [2024-12-06 11:28:44.136569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.191 [2024-12-06 11:28:44.136811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.191 [2024-12-06 11:28:44.137046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.191 [2024-12-06 11:28:44.137056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.191 [2024-12-06 11:28:44.137064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.191 [2024-12-06 11:28:44.137072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.191 [2024-12-06 11:28:44.149788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.191 [2024-12-06 11:28:44.150468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.191 [2024-12-06 11:28:44.150506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.191 [2024-12-06 11:28:44.150517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.191 [2024-12-06 11:28:44.150759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.191 [2024-12-06 11:28:44.151004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.191 [2024-12-06 11:28:44.151015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.191 [2024-12-06 11:28:44.151023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.191 [2024-12-06 11:28:44.151031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.191 [2024-12-06 11:28:44.163748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.164299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.164319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.164327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.164549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.164771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.164780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.164787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.164794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.192 [2024-12-06 11:28:44.177717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.178228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.178245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.178253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.178475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.178696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.178704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.178711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.178718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.192 [2024-12-06 11:28:44.191635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.192143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.192160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.192167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.192388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.192610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.192624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.192631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.192638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.192 [2024-12-06 11:28:44.205552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.206083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.206100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.206111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.206333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.206554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.206562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.206569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.206575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.192 [2024-12-06 11:28:44.219488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.220114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.220152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.220163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.220405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.220631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.220641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.220648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.220656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.192 [2024-12-06 11:28:44.233386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.233992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.234030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.234041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.234283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.234509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.234518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.234526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.234534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.192 [2024-12-06 11:28:44.247257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.247846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.247871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.247879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.248101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.248332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.248341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.248348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.248354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.192 [2024-12-06 11:28:44.261280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.192 [2024-12-06 11:28:44.261821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-12-06 11:28:44.261838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-12-06 11:28:44.261846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.192 [2024-12-06 11:28:44.262073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.192 [2024-12-06 11:28:44.262295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.192 [2024-12-06 11:28:44.262304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.192 [2024-12-06 11:28:44.262312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.192 [2024-12-06 11:28:44.262320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.193 [2024-12-06 11:28:44.275262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.193 [2024-12-06 11:28:44.275779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-12-06 11:28:44.275819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-12-06 11:28:44.275832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.193 [2024-12-06 11:28:44.276083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.193 [2024-12-06 11:28:44.276311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.193 [2024-12-06 11:28:44.276320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.193 [2024-12-06 11:28:44.276328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.193 [2024-12-06 11:28:44.276337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.193 [2024-12-06 11:28:44.289266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.193 [2024-12-06 11:28:44.289953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-12-06 11:28:44.289992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-12-06 11:28:44.290004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.193 [2024-12-06 11:28:44.290247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.193 [2024-12-06 11:28:44.290474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.193 [2024-12-06 11:28:44.290483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.193 [2024-12-06 11:28:44.290495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.193 [2024-12-06 11:28:44.290503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.193 [2024-12-06 11:28:44.303223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.193 [2024-12-06 11:28:44.303767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-12-06 11:28:44.303787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-12-06 11:28:44.303795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.193 [2024-12-06 11:28:44.304023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.193 [2024-12-06 11:28:44.304246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.193 [2024-12-06 11:28:44.304254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.193 [2024-12-06 11:28:44.304261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.193 [2024-12-06 11:28:44.304267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.193 [2024-12-06 11:28:44.317184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.193 [2024-12-06 11:28:44.317721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-12-06 11:28:44.317737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-12-06 11:28:44.317745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.193 [2024-12-06 11:28:44.317971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.193 [2024-12-06 11:28:44.318193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.193 [2024-12-06 11:28:44.318202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.193 [2024-12-06 11:28:44.318209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.193 [2024-12-06 11:28:44.318216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.193 [2024-12-06 11:28:44.331128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.193 [2024-12-06 11:28:44.331700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-12-06 11:28:44.331716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-12-06 11:28:44.331724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.193 [2024-12-06 11:28:44.331950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.193 [2024-12-06 11:28:44.332172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.193 [2024-12-06 11:28:44.332180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.193 [2024-12-06 11:28:44.332187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.193 [2024-12-06 11:28:44.332193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.193 [2024-12-06 11:28:44.345124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.193 [2024-12-06 11:28:44.345649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-12-06 11:28:44.345665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-12-06 11:28:44.345673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.193 [2024-12-06 11:28:44.345901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.193 [2024-12-06 11:28:44.346124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.193 [2024-12-06 11:28:44.346132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.193 [2024-12-06 11:28:44.346139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.193 [2024-12-06 11:28:44.346145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.456 [2024-12-06 11:28:44.359069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.456 [2024-12-06 11:28:44.359586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-12-06 11:28:44.359603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-12-06 11:28:44.359610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.456 [2024-12-06 11:28:44.359832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.456 [2024-12-06 11:28:44.360058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.456 [2024-12-06 11:28:44.360067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.456 [2024-12-06 11:28:44.360074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.456 [2024-12-06 11:28:44.360081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.456 [2024-12-06 11:28:44.372969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.456 [2024-12-06 11:28:44.373511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-12-06 11:28:44.373528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-12-06 11:28:44.373535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.456 [2024-12-06 11:28:44.373758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.456 [2024-12-06 11:28:44.373985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.456 [2024-12-06 11:28:44.373993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.456 [2024-12-06 11:28:44.374000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.456 [2024-12-06 11:28:44.374007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.456 [2024-12-06 11:28:44.386922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.456 [2024-12-06 11:28:44.387460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-12-06 11:28:44.387477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-12-06 11:28:44.387488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.456 [2024-12-06 11:28:44.387710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.456 [2024-12-06 11:28:44.387938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.456 [2024-12-06 11:28:44.387947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.456 [2024-12-06 11:28:44.387955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.456 [2024-12-06 11:28:44.387961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.456 [2024-12-06 11:28:44.400875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.456 [2024-12-06 11:28:44.401404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-12-06 11:28:44.401420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-12-06 11:28:44.401427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.456 [2024-12-06 11:28:44.401648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.456 [2024-12-06 11:28:44.401874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.456 [2024-12-06 11:28:44.401884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.456 [2024-12-06 11:28:44.401891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.456 [2024-12-06 11:28:44.401898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.456 [2024-12-06 11:28:44.414810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.456 [2024-12-06 11:28:44.415381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-12-06 11:28:44.415397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-12-06 11:28:44.415405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.456 [2024-12-06 11:28:44.415626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.456 [2024-12-06 11:28:44.415847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.456 [2024-12-06 11:28:44.415856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.457 [2024-12-06 11:28:44.415868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.457 [2024-12-06 11:28:44.415875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.457 [2024-12-06 11:28:44.428785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.457 [2024-12-06 11:28:44.429321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-12-06 11:28:44.429338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-12-06 11:28:44.429345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.457 [2024-12-06 11:28:44.429567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.457 [2024-12-06 11:28:44.429792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.457 [2024-12-06 11:28:44.429799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.457 [2024-12-06 11:28:44.429806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.457 [2024-12-06 11:28:44.429812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.457 [2024-12-06 11:28:44.442730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.457 [2024-12-06 11:28:44.443153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-12-06 11:28:44.443170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-12-06 11:28:44.443177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.457 [2024-12-06 11:28:44.443399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.457 [2024-12-06 11:28:44.443620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.457 [2024-12-06 11:28:44.443629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.457 [2024-12-06 11:28:44.443636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.457 [2024-12-06 11:28:44.443642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.457 [2024-12-06 11:28:44.456564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.457 [2024-12-06 11:28:44.457143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-12-06 11:28:44.457161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-12-06 11:28:44.457168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.457 [2024-12-06 11:28:44.457390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.457 [2024-12-06 11:28:44.457611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.457 [2024-12-06 11:28:44.457619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.457 [2024-12-06 11:28:44.457626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.457 [2024-12-06 11:28:44.457632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.457 [2024-12-06 11:28:44.470550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.457 [2024-12-06 11:28:44.471080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-12-06 11:28:44.471096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-12-06 11:28:44.471103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.457 [2024-12-06 11:28:44.471324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.457 [2024-12-06 11:28:44.471545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.457 [2024-12-06 11:28:44.471554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.457 [2024-12-06 11:28:44.471565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.457 [2024-12-06 11:28:44.471571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.457 [2024-12-06 11:28:44.484514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.457 [2024-12-06 11:28:44.484997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-12-06 11:28:44.485014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-12-06 11:28:44.485021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.457 [2024-12-06 11:28:44.485243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.457 [2024-12-06 11:28:44.485465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.457 [2024-12-06 11:28:44.485474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.457 [2024-12-06 11:28:44.485481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.457 [2024-12-06 11:28:44.485488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3620432 Killed "${NVMF_APP[@]}" "$@" 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3622102 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3622102 00:29:38.457 [2024-12-06 11:28:44.498407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3622102 ']' 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.457 [2024-12-06 11:28:44.498985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-12-06 11:28:44.499002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-12-06 11:28:44.499009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:29:38.457 [2024-12-06 11:28:44.499231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.457 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.457 [2024-12-06 11:28:44.499453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.457 [2024-12-06 11:28:44.499461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.457 [2024-12-06 11:28:44.499472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.457 [2024-12-06 11:28:44.499479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.458 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.458 11:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.458 [2024-12-06 11:28:44.512407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.512976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.512992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.512999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.513221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.458 [2024-12-06 11:28:44.513444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.458 [2024-12-06 11:28:44.513453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.458 [2024-12-06 11:28:44.513461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.458 [2024-12-06 11:28:44.513469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.458 [2024-12-06 11:28:44.526386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.527103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.527142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.527155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.527401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.458 [2024-12-06 11:28:44.527627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.458 [2024-12-06 11:28:44.527636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.458 [2024-12-06 11:28:44.527644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.458 [2024-12-06 11:28:44.527652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.458 [2024-12-06 11:28:44.540385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.540962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.540982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.540990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.541213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.458 [2024-12-06 11:28:44.541436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.458 [2024-12-06 11:28:44.541444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.458 [2024-12-06 11:28:44.541451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.458 [2024-12-06 11:28:44.541458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.458 [2024-12-06 11:28:44.551642] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:29:38.458 [2024-12-06 11:28:44.551687] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.458 [2024-12-06 11:28:44.554404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.555196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.555234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.555246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.555488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.458 [2024-12-06 11:28:44.555715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.458 [2024-12-06 11:28:44.555725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.458 [2024-12-06 11:28:44.555734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.458 [2024-12-06 11:28:44.555742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.458 [2024-12-06 11:28:44.568257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.568810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.568829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.568837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.569065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.458 [2024-12-06 11:28:44.569289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.458 [2024-12-06 11:28:44.569296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.458 [2024-12-06 11:28:44.569304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.458 [2024-12-06 11:28:44.569311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.458 [2024-12-06 11:28:44.582324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.582882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.582920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.582933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.583177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.458 [2024-12-06 11:28:44.583403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.458 [2024-12-06 11:28:44.583413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.458 [2024-12-06 11:28:44.583420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.458 [2024-12-06 11:28:44.583433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.458 [2024-12-06 11:28:44.596373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.596943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.596963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.596972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.597195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.458 [2024-12-06 11:28:44.597417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.458 [2024-12-06 11:28:44.597426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.458 [2024-12-06 11:28:44.597433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.458 [2024-12-06 11:28:44.597441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.458 [2024-12-06 11:28:44.610363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.458 [2024-12-06 11:28:44.611076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.458 [2024-12-06 11:28:44.611114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.458 [2024-12-06 11:28:44.611125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.458 [2024-12-06 11:28:44.611368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.459 [2024-12-06 11:28:44.611594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.459 [2024-12-06 11:28:44.611603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.459 [2024-12-06 11:28:44.611611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.459 [2024-12-06 11:28:44.611619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.720 [2024-12-06 11:28:44.624340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.720 [2024-12-06 11:28:44.624819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.720 [2024-12-06 11:28:44.624838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.720 [2024-12-06 11:28:44.624846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.720 [2024-12-06 11:28:44.625076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.720 [2024-12-06 11:28:44.625298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.720 [2024-12-06 11:28:44.625306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.720 [2024-12-06 11:28:44.625314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.720 [2024-12-06 11:28:44.625321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.720 [2024-12-06 11:28:44.638283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.720 [2024-12-06 11:28:44.638950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.720 [2024-12-06 11:28:44.638993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.720 [2024-12-06 11:28:44.639006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.720 [2024-12-06 11:28:44.639249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.720 [2024-12-06 11:28:44.639476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.720 [2024-12-06 11:28:44.639485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.720 [2024-12-06 11:28:44.639492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.720 [2024-12-06 11:28:44.639500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.720 [2024-12-06 11:28:44.649052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:38.720 [2024-12-06 11:28:44.652229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.720 [2024-12-06 11:28:44.652675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.720 [2024-12-06 11:28:44.652694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.720 [2024-12-06 11:28:44.652702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.720 [2024-12-06 11:28:44.652944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.720 [2024-12-06 11:28:44.653168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.720 [2024-12-06 11:28:44.653177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.720 [2024-12-06 11:28:44.653184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.721 [2024-12-06 11:28:44.653191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.721 [2024-12-06 11:28:44.666120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.721 [2024-12-06 11:28:44.666756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.721 [2024-12-06 11:28:44.666796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.721 [2024-12-06 11:28:44.666807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.721 [2024-12-06 11:28:44.667059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.721 [2024-12-06 11:28:44.667287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.721 [2024-12-06 11:28:44.667296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.721 [2024-12-06 11:28:44.667304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.721 [2024-12-06 11:28:44.667312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:38.721 [2024-12-06 11:28:44.678107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.721 [2024-12-06 11:28:44.678128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.721 [2024-12-06 11:28:44.678135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.721 [2024-12-06 11:28:44.678144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:38.721 [2024-12-06 11:28:44.678148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.721 [2024-12-06 11:28:44.679202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.721 [2024-12-06 11:28:44.679355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.721 [2024-12-06 11:28:44.679357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.721 [2024-12-06 11:28:44.680035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.721 [2024-12-06 11:28:44.680750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.721 [2024-12-06 11:28:44.680788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.721 [2024-12-06 11:28:44.680800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.721 [2024-12-06 11:28:44.681052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.721 [2024-12-06 11:28:44.681280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.721 [2024-12-06 11:28:44.681289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.721 [2024-12-06 11:28:44.681297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.721 [2024-12-06 11:28:44.681305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.721 [2024-12-06 11:28:44.694082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.721 [2024-12-06 11:28:44.694564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.721 [2024-12-06 11:28:44.694584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.721 [2024-12-06 11:28:44.694593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.721 [2024-12-06 11:28:44.694817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.721 [2024-12-06 11:28:44.695046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.721 [2024-12-06 11:28:44.695055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.721 [2024-12-06 11:28:44.695063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.721 [2024-12-06 11:28:44.695069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.721 [2024-12-06 11:28:44.708134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:38.721 [2024-12-06 11:28:44.708635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.721 [2024-12-06 11:28:44.708675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:38.721 [2024-12-06 11:28:44.708686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:38.721 [2024-12-06 11:28:44.708940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:38.721 [2024-12-06 11:28:44.709168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:38.721 [2024-12-06 11:28:44.709177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:38.721 [2024-12-06 11:28:44.709186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:38.721 [2024-12-06 11:28:44.709199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:38.721 [2024-12-06 11:28:44.722134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.721 [2024-12-06 11:28:44.722838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.721 [2024-12-06 11:28:44.722884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.721 [2024-12-06 11:28:44.722896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.721 [2024-12-06 11:28:44.723140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.721 [2024-12-06 11:28:44.723367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.721 [2024-12-06 11:28:44.723376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.721 [2024-12-06 11:28:44.723385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.721 [2024-12-06 11:28:44.723393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.721 [2024-12-06 11:28:44.736120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.721 [2024-12-06 11:28:44.736816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.721 [2024-12-06 11:28:44.736853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.721 [2024-12-06 11:28:44.736873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.721 [2024-12-06 11:28:44.737118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.721 [2024-12-06 11:28:44.737345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.721 [2024-12-06 11:28:44.737354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.721 [2024-12-06 11:28:44.737362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.721 [2024-12-06 11:28:44.737370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.721 [2024-12-06 11:28:44.750085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.721 [2024-12-06 11:28:44.750783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.721 [2024-12-06 11:28:44.750821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.721 [2024-12-06 11:28:44.750832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.721 [2024-12-06 11:28:44.751084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.751311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.751320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.751328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.722 [2024-12-06 11:28:44.751336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.722 [2024-12-06 11:28:44.764065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.722 [2024-12-06 11:28:44.764738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.722 [2024-12-06 11:28:44.764777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.722 [2024-12-06 11:28:44.764787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.722 [2024-12-06 11:28:44.765037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.765266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.765275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.765284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.722 [2024-12-06 11:28:44.765293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.722 [2024-12-06 11:28:44.778013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.722 [2024-12-06 11:28:44.778567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.722 [2024-12-06 11:28:44.778606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.722 [2024-12-06 11:28:44.778616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.722 [2024-12-06 11:28:44.778859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.779094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.779105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.779113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.722 [2024-12-06 11:28:44.779121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.722 [2024-12-06 11:28:44.792046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.722 [2024-12-06 11:28:44.792669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.722 [2024-12-06 11:28:44.792706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.722 [2024-12-06 11:28:44.792717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.722 [2024-12-06 11:28:44.792972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.793199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.793208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.793216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.722 [2024-12-06 11:28:44.793224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.722 [2024-12-06 11:28:44.805940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.722 [2024-12-06 11:28:44.806504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.722 [2024-12-06 11:28:44.806523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.722 [2024-12-06 11:28:44.806531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.722 [2024-12-06 11:28:44.806758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.806988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.806997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.807004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.722 [2024-12-06 11:28:44.807011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.722 [2024-12-06 11:28:44.819927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.722 [2024-12-06 11:28:44.820464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.722 [2024-12-06 11:28:44.820502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.722 [2024-12-06 11:28:44.820513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.722 [2024-12-06 11:28:44.820755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.820988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.820998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.821006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.722 [2024-12-06 11:28:44.821013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.722 [2024-12-06 11:28:44.833954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.722 [2024-12-06 11:28:44.834645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.722 [2024-12-06 11:28:44.834684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.722 [2024-12-06 11:28:44.834695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.722 [2024-12-06 11:28:44.834944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.835172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.835181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.835189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.722 [2024-12-06 11:28:44.835196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.722 4650.67 IOPS, 18.17 MiB/s [2024-12-06T10:28:44.889Z] [2024-12-06 11:28:44.849572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.722 [2024-12-06 11:28:44.850236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.722 [2024-12-06 11:28:44.850275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.722 [2024-12-06 11:28:44.850287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.722 [2024-12-06 11:28:44.850531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.722 [2024-12-06 11:28:44.850757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.722 [2024-12-06 11:28:44.850771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.722 [2024-12-06 11:28:44.850779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.723 [2024-12-06 11:28:44.850787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.723 [2024-12-06 11:28:44.863532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.723 [2024-12-06 11:28:44.864000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.723 [2024-12-06 11:28:44.864039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.723 [2024-12-06 11:28:44.864051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.723 [2024-12-06 11:28:44.864297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.723 [2024-12-06 11:28:44.864524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.723 [2024-12-06 11:28:44.864533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.723 [2024-12-06 11:28:44.864541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.723 [2024-12-06 11:28:44.864549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.723 [2024-12-06 11:28:44.877481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.723 [2024-12-06 11:28:44.878201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.723 [2024-12-06 11:28:44.878239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.723 [2024-12-06 11:28:44.878250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.723 [2024-12-06 11:28:44.878492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.723 [2024-12-06 11:28:44.878719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.723 [2024-12-06 11:28:44.878728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.723 [2024-12-06 11:28:44.878735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.723 [2024-12-06 11:28:44.878743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.985 [2024-12-06 11:28:44.891464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.985 [2024-12-06 11:28:44.892156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.985 [2024-12-06 11:28:44.892195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.985 [2024-12-06 11:28:44.892206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.985 [2024-12-06 11:28:44.892449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.985 [2024-12-06 11:28:44.892676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.985 [2024-12-06 11:28:44.892685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.985 [2024-12-06 11:28:44.892693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.985 [2024-12-06 11:28:44.892705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.985 [2024-12-06 11:28:44.905477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.985 [2024-12-06 11:28:44.906209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.985 [2024-12-06 11:28:44.906247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.985 [2024-12-06 11:28:44.906258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.985 [2024-12-06 11:28:44.906501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.985 [2024-12-06 11:28:44.906727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.985 [2024-12-06 11:28:44.906736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.985 [2024-12-06 11:28:44.906744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.985 [2024-12-06 11:28:44.906752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.985 [2024-12-06 11:28:44.919472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.985 [2024-12-06 11:28:44.920070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.985 [2024-12-06 11:28:44.920090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.985 [2024-12-06 11:28:44.920098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.985 [2024-12-06 11:28:44.920321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.985 [2024-12-06 11:28:44.920542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.985 [2024-12-06 11:28:44.920551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.985 [2024-12-06 11:28:44.920558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.985 [2024-12-06 11:28:44.920564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.985 [2024-12-06 11:28:44.933495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.985 [2024-12-06 11:28:44.934057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.985 [2024-12-06 11:28:44.934074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.985 [2024-12-06 11:28:44.934081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.985 [2024-12-06 11:28:44.934303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.985 [2024-12-06 11:28:44.934525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.985 [2024-12-06 11:28:44.934533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.985 [2024-12-06 11:28:44.934540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.985 [2024-12-06 11:28:44.934547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.985 [2024-12-06 11:28:44.947462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.985 [2024-12-06 11:28:44.948157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.985 [2024-12-06 11:28:44.948204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.985 [2024-12-06 11:28:44.948215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:44.948457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:44.948684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:44.948693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:44.948701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:44.948709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:44.961442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.986 [2024-12-06 11:28:44.962151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.986 [2024-12-06 11:28:44.962189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.986 [2024-12-06 11:28:44.962200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:44.962442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:44.962669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:44.962678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:44.962685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:44.962693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:44.975411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.986 [2024-12-06 11:28:44.976033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.986 [2024-12-06 11:28:44.976072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.986 [2024-12-06 11:28:44.976083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:44.976325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:44.976551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:44.976559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:44.976567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:44.976575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:44.989292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.986 [2024-12-06 11:28:44.989978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.986 [2024-12-06 11:28:44.990016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.986 [2024-12-06 11:28:44.990027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:44.990274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:44.990501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:44.990510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:44.990518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:44.990525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:45.003248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.986 [2024-12-06 11:28:45.003898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.986 [2024-12-06 11:28:45.003936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.986 [2024-12-06 11:28:45.003949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:45.004192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:45.004419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:45.004428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:45.004436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:45.004444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:45.017159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.986 [2024-12-06 11:28:45.017719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.986 [2024-12-06 11:28:45.017757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.986 [2024-12-06 11:28:45.017768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:45.018018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:45.018244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:45.018254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:45.018262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:45.018270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:45.031194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.986 [2024-12-06 11:28:45.031890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.986 [2024-12-06 11:28:45.031929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.986 [2024-12-06 11:28:45.031941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:45.032186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:45.032413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:45.032427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:45.032436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:45.032444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:45.045176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.986 [2024-12-06 11:28:45.045876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.986 [2024-12-06 11:28:45.045914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.986 [2024-12-06 11:28:45.045926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.986 [2024-12-06 11:28:45.046171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.986 [2024-12-06 11:28:45.046397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.986 [2024-12-06 11:28:45.046406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.986 [2024-12-06 11:28:45.046413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.986 [2024-12-06 11:28:45.046421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.986 [2024-12-06 11:28:45.059143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.987 [2024-12-06 11:28:45.059835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.987 [2024-12-06 11:28:45.059882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.987 [2024-12-06 11:28:45.059894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.987 [2024-12-06 11:28:45.060136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.987 [2024-12-06 11:28:45.060362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.987 [2024-12-06 11:28:45.060372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.987 [2024-12-06 11:28:45.060380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.987 [2024-12-06 11:28:45.060388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.987 [2024-12-06 11:28:45.073098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.987 [2024-12-06 11:28:45.073682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.987 [2024-12-06 11:28:45.073702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.987 [2024-12-06 11:28:45.073709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.987 [2024-12-06 11:28:45.073938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.987 [2024-12-06 11:28:45.074160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.987 [2024-12-06 11:28:45.074169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.987 [2024-12-06 11:28:45.074176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.987 [2024-12-06 11:28:45.074183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.987 [2024-12-06 11:28:45.087099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.987 [2024-12-06 11:28:45.087538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.987 [2024-12-06 11:28:45.087554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.987 [2024-12-06 11:28:45.087562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.987 [2024-12-06 11:28:45.087783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.987 [2024-12-06 11:28:45.088010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.987 [2024-12-06 11:28:45.088019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.987 [2024-12-06 11:28:45.088026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.987 [2024-12-06 11:28:45.088033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.987 [2024-12-06 11:28:45.100946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.987 [2024-12-06 11:28:45.101449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.987 [2024-12-06 11:28:45.101465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.987 [2024-12-06 11:28:45.101473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.987 [2024-12-06 11:28:45.101694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.987 [2024-12-06 11:28:45.101920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.987 [2024-12-06 11:28:45.101937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.987 [2024-12-06 11:28:45.101944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.987 [2024-12-06 11:28:45.101951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.987 [2024-12-06 11:28:45.114901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.987 [2024-12-06 11:28:45.115568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.987 [2024-12-06 11:28:45.115607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.987 [2024-12-06 11:28:45.115618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.987 [2024-12-06 11:28:45.115860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.987 [2024-12-06 11:28:45.116095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.987 [2024-12-06 11:28:45.116104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.987 [2024-12-06 11:28:45.116112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.987 [2024-12-06 11:28:45.116120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.987 [2024-12-06 11:28:45.128829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.987 [2024-12-06 11:28:45.129383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.987 [2024-12-06 11:28:45.129407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.987 [2024-12-06 11:28:45.129416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.987 [2024-12-06 11:28:45.129639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.987 [2024-12-06 11:28:45.129860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.987 [2024-12-06 11:28:45.129877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.987 [2024-12-06 11:28:45.129884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.987 [2024-12-06 11:28:45.129891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:38.987 [2024-12-06 11:28:45.142808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:38.987 [2024-12-06 11:28:45.143378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.987 [2024-12-06 11:28:45.143395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:38.987 [2024-12-06 11:28:45.143403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:38.987 [2024-12-06 11:28:45.143625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:38.987 [2024-12-06 11:28:45.143846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:38.987 [2024-12-06 11:28:45.143854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:38.987 [2024-12-06 11:28:45.143861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:38.987 [2024-12-06 11:28:45.143872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.249 [2024-12-06 11:28:45.156657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.249 [2024-12-06 11:28:45.157216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.249 [2024-12-06 11:28:45.157233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.249 [2024-12-06 11:28:45.157240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.249 [2024-12-06 11:28:45.157462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.249 [2024-12-06 11:28:45.157685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.249 [2024-12-06 11:28:45.157692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.249 [2024-12-06 11:28:45.157699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.249 [2024-12-06 11:28:45.157706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.249 [2024-12-06 11:28:45.170618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.249 [2024-12-06 11:28:45.171205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.249 [2024-12-06 11:28:45.171244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.249 [2024-12-06 11:28:45.171256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.249 [2024-12-06 11:28:45.171503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.249 [2024-12-06 11:28:45.171730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.249 [2024-12-06 11:28:45.171739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.249 [2024-12-06 11:28:45.171746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.249 [2024-12-06 11:28:45.171754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.249 [2024-12-06 11:28:45.184473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.249 [2024-12-06 11:28:45.185147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.249 [2024-12-06 11:28:45.185184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.249 [2024-12-06 11:28:45.185196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.249 [2024-12-06 11:28:45.185438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.249 [2024-12-06 11:28:45.185665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.249 [2024-12-06 11:28:45.185674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.249 [2024-12-06 11:28:45.185682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.249 [2024-12-06 11:28:45.185689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.249 [2024-12-06 11:28:45.198414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.249 [2024-12-06 11:28:45.199170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.249 [2024-12-06 11:28:45.199208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.249 [2024-12-06 11:28:45.199220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.249 [2024-12-06 11:28:45.199462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.249 [2024-12-06 11:28:45.199688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.249 [2024-12-06 11:28:45.199697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.249 [2024-12-06 11:28:45.199705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.249 [2024-12-06 11:28:45.199713] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.249 [2024-12-06 11:28:45.212426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.249 [2024-12-06 11:28:45.213110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.249 [2024-12-06 11:28:45.213148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.249 [2024-12-06 11:28:45.213159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.249 [2024-12-06 11:28:45.213401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.249 [2024-12-06 11:28:45.213627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.249 [2024-12-06 11:28:45.213636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.249 [2024-12-06 11:28:45.213648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.249 [2024-12-06 11:28:45.213656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.249 [2024-12-06 11:28:45.226377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.249 [2024-12-06 11:28:45.226949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.226968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.226977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.227200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.227422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.227430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.227437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.227443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.240377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.240985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.241024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.241035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.241277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.241503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.241512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.241520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.241528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.254252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.254927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.254966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.254978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.255222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.255449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.255458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.255466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.255474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.268205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.268856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.268901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.268912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.269155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.269381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.269391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.269398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.269407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.282122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.282658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.282697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.282708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.282960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.283188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.283198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.283207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.283215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.296146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.296715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.296734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.296742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.296970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.297193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.297202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.297209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.297216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.310127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.310715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.310736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.310744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.310972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.311195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.311204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.311211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.311217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.324156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.324614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.324632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.324640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.324867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.325090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.325099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.325106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.325113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.338060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.338599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.338637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.338649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.338899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.339126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.339135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.339143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.339151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 [2024-12-06 11:28:45.352071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.250 [2024-12-06 11:28:45.352678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 11:28:45.352698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.250 [2024-12-06 11:28:45.352706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.250 [2024-12-06 11:28:45.352935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.250 [2024-12-06 11:28:45.353164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.250 [2024-12-06 11:28:45.353172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.250 [2024-12-06 11:28:45.353179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.250 [2024-12-06 11:28:45.353186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.250 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x [2024-12-06 11:28:45.366120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:39.251 [2024-12-06 11:28:45.366670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.251 [2024-12-06 11:28:45.366689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420
00:29:39.251 [2024-12-06 11:28:45.366698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set
00:29:39.251 [2024-12-06 11:28:45.366927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor
00:29:39.251 [2024-12-06 11:28:45.367152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:39.251 [2024-12-06 11:28:45.367161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:39.251 [2024-12-06 11:28:45.367170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:39.251 [2024-12-06 11:28:45.367176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:39.251 [2024-12-06 11:28:45.380091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.251 [2024-12-06 11:28:45.380509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.251 [2024-12-06 11:28:45.380525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:39.251 [2024-12-06 11:28:45.380533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:39.251 [2024-12-06 11:28:45.380755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:39.251 [2024-12-06 11:28:45.380983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.251 [2024-12-06 11:28:45.380992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.251 [2024-12-06 11:28:45.380999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.251 [2024-12-06 11:28:45.381005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.251 [2024-12-06 11:28:45.394128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.251 [2024-12-06 11:28:45.394558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.251 [2024-12-06 11:28:45.394575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:39.251 [2024-12-06 11:28:45.394586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.251 [2024-12-06 11:28:45.394808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.251 [2024-12-06 11:28:45.395035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.251 [2024-12-06 11:28:45.395046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.251 [2024-12-06 11:28:45.395053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.251 [2024-12-06 11:28:45.395060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:39.251 [2024-12-06 11:28:45.400553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.251 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:39.251 [2024-12-06 11:28:45.407966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.251 [2024-12-06 11:28:45.408555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.251 [2024-12-06 11:28:45.408571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:39.251 [2024-12-06 11:28:45.408578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:39.251 [2024-12-06 11:28:45.408799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:39.251 [2024-12-06 11:28:45.409025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.251 [2024-12-06 11:28:45.409034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.251 [2024-12-06 11:28:45.409041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:39.251 [2024-12-06 11:28:45.409047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:39.511 [2024-12-06 11:28:45.421956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.511 [2024-12-06 11:28:45.422600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.511 [2024-12-06 11:28:45.422639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:39.511 [2024-12-06 11:28:45.422649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:39.511 [2024-12-06 11:28:45.422901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:39.511 [2024-12-06 11:28:45.423128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.511 [2024-12-06 11:28:45.423137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.511 [2024-12-06 11:28:45.423145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.511 [2024-12-06 11:28:45.423158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.511 Malloc0 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:39.511 [2024-12-06 11:28:45.435877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.511 [2024-12-06 11:28:45.436563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.511 [2024-12-06 11:28:45.436602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:39.511 [2024-12-06 11:28:45.436613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:39.511 [2024-12-06 11:28:45.436855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:39.511 [2024-12-06 11:28:45.437090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.511 [2024-12-06 11:28:45.437100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.511 [2024-12-06 11:28:45.437108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.511 [2024-12-06 11:28:45.437116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:39.511 [2024-12-06 11:28:45.449829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.511 [2024-12-06 11:28:45.450526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.511 [2024-12-06 11:28:45.450565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:39.511 [2024-12-06 11:28:45.450576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:39.511 [2024-12-06 11:28:45.450818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:39.511 [2024-12-06 11:28:45.451053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.511 [2024-12-06 11:28:45.451063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.511 [2024-12-06 11:28:45.451072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.511 [2024-12-06 11:28:45.451080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.511 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:39.511 [2024-12-06 11:28:45.463796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.511 [2024-12-06 11:28:45.464457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.511 [2024-12-06 11:28:45.464496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6c780 with addr=10.0.0.2, port=4420 00:29:39.511 [2024-12-06 11:28:45.464507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6c780 is same with the state(6) to be set 00:29:39.512 [2024-12-06 11:28:45.464748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6c780 (9): Bad file descriptor 00:29:39.512 [2024-12-06 11:28:45.464982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:39.512 [2024-12-06 11:28:45.464992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:39.512 [2024-12-06 11:28:45.465001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:39.512 [2024-12-06 11:28:45.465009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:39.512 [2024-12-06 11:28:45.465723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.512 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.512 11:28:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3620810 00:29:39.512 [2024-12-06 11:28:45.477716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:39.512 [2024-12-06 11:28:45.515121] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:29:40.711 4524.43 IOPS, 17.67 MiB/s [2024-12-06T10:28:48.262Z] 5345.12 IOPS, 20.88 MiB/s [2024-12-06T10:28:49.203Z] 5999.11 IOPS, 23.43 MiB/s [2024-12-06T10:28:50.141Z] 6512.00 IOPS, 25.44 MiB/s [2024-12-06T10:28:51.081Z] 6959.45 IOPS, 27.19 MiB/s [2024-12-06T10:28:52.022Z] 7309.58 IOPS, 28.55 MiB/s [2024-12-06T10:28:52.965Z] 7594.31 IOPS, 29.67 MiB/s [2024-12-06T10:28:53.908Z] 7858.29 IOPS, 30.70 MiB/s [2024-12-06T10:28:53.908Z] 8086.53 IOPS, 31.59 MiB/s 00:29:47.741 Latency(us) 00:29:47.741 [2024-12-06T10:28:53.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.741 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:47.741 Verification LBA range: start 0x0 length 0x4000 00:29:47.741 Nvme1n1 : 15.01 8086.86 31.59 9723.02 0.00 7161.20 802.13 15947.09 00:29:47.741 [2024-12-06T10:28:53.908Z] =================================================================================================================== 00:29:47.741 [2024-12-06T10:28:53.908Z] Total : 8086.86 31.59 9723.02 0.00 7161.20 802.13 15947.09 00:29:48.002 11:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:48.002 11:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.002 11:28:53 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.002 11:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.002 rmmod nvme_tcp 00:29:48.002 rmmod nvme_fabrics 00:29:48.002 rmmod nvme_keyring 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3622102 ']' 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3622102 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3622102 ']' 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3622102 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:29:48.002 11:28:54 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3622102 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3622102' 00:29:48.002 killing process with pid 3622102 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3622102 00:29:48.002 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3622102 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:48.264 11:28:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.175 11:28:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.175 00:29:50.175 real 0m29.052s 00:29:50.175 user 1m3.471s 00:29:50.175 sys 0m8.195s 00:29:50.175 11:28:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.175 11:28:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.175 ************************************ 00:29:50.175 END TEST nvmf_bdevperf 00:29:50.175 ************************************ 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.436 ************************************ 00:29:50.436 START TEST nvmf_target_disconnect 00:29:50.436 ************************************ 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:50.436 * Looking for test storage... 
00:29:50.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.436 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.437 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.437 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.437 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.437 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:50.437 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:50.437 11:28:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.437 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:50.437 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.699 
--rc genhtml_branch_coverage=1 00:29:50.699 --rc genhtml_function_coverage=1 00:29:50.699 --rc genhtml_legend=1 00:29:50.699 --rc geninfo_all_blocks=1 00:29:50.699 --rc geninfo_unexecuted_blocks=1 00:29:50.699 00:29:50.699 ' 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:50.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.699 --rc genhtml_branch_coverage=1 00:29:50.699 --rc genhtml_function_coverage=1 00:29:50.699 --rc genhtml_legend=1 00:29:50.699 --rc geninfo_all_blocks=1 00:29:50.699 --rc geninfo_unexecuted_blocks=1 00:29:50.699 00:29:50.699 ' 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.699 --rc genhtml_branch_coverage=1 00:29:50.699 --rc genhtml_function_coverage=1 00:29:50.699 --rc genhtml_legend=1 00:29:50.699 --rc geninfo_all_blocks=1 00:29:50.699 --rc geninfo_unexecuted_blocks=1 00:29:50.699 00:29:50.699 ' 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.699 --rc genhtml_branch_coverage=1 00:29:50.699 --rc genhtml_function_coverage=1 00:29:50.699 --rc genhtml_legend=1 00:29:50.699 --rc geninfo_all_blocks=1 00:29:50.699 --rc geninfo_unexecuted_blocks=1 00:29:50.699 00:29:50.699 ' 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.699 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.700 11:28:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.700 11:28:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.842 
11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:58.842 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:58.842 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.842 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:58.843 Found net devices under 0000:31:00.0: cvl_0_0 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:58.843 Found net devices under 0000:31:00.1: cvl_0_1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.843 11:29:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:29:58.843 00:29:58.843 --- 10.0.0.2 ping statistics --- 00:29:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.843 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:29:58.843 00:29:58.843 --- 10.0.0.1 ping statistics --- 00:29:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.843 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.843 11:29:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.843 ************************************ 00:29:58.843 START TEST nvmf_target_disconnect_tc1 00:29:58.843 ************************************ 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:58.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.118 [2024-12-06 11:29:05.049187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.118 [2024-12-06 11:29:05.049247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x667d00 with 
addr=10.0.0.2, port=4420 00:29:59.118 [2024-12-06 11:29:05.049278] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:59.118 [2024-12-06 11:29:05.049292] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:59.118 [2024-12-06 11:29:05.049299] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:59.118 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:59.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:59.118 Initializing NVMe Controllers 00:29:59.118 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:59.118 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:59.118 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:59.118 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:59.118 00:29:59.118 real 0m0.124s 00:29:59.118 user 0m0.064s 00:29:59.118 sys 0m0.059s 00:29:59.118 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.119 ************************************ 00:29:59.119 END TEST nvmf_target_disconnect_tc1 00:29:59.119 ************************************ 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:59.119 11:29:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:59.119 ************************************ 00:29:59.119 START TEST nvmf_target_disconnect_tc2 00:29:59.119 ************************************ 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3628542 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3628542 00:29:59.119 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:59.120 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3628542 ']' 00:29:59.120 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.120 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.120 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.120 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.120 11:29:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.120 [2024-12-06 11:29:05.207462] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:29:59.120 [2024-12-06 11:29:05.207513] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.381 [2024-12-06 11:29:05.313997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.381 [2024-12-06 11:29:05.366433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.381 [2024-12-06 11:29:05.366489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.381 [2024-12-06 11:29:05.366497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.381 [2024-12-06 11:29:05.366505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.381 [2024-12-06 11:29:05.366511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:59.381 [2024-12-06 11:29:05.368818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:59.381 [2024-12-06 11:29:05.368980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:59.381 [2024-12-06 11:29:05.369112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:59.381 [2024-12-06 11:29:05.369112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 Malloc0 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.950 11:29:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.950 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:59.950 [2024-12-06 11:29:06.113806] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.210 11:29:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.210 [2024-12-06 11:29:06.142150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3628892 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:00.210 11:29:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.121 11:29:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3628542 00:30:02.121 11:29:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Write completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Write completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Write completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read 
completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Write completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Write completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 [2024-12-06 11:29:08.169990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 00:30:02.121 Read completed with error (sct=0, sc=8) 00:30:02.121 starting I/O failed 
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 [2024-12-06 11:29:08.170268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Write completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 Read completed with error (sct=0, sc=8)
00:30:02.121 starting I/O failed
00:30:02.121 [2024-12-06 11:29:08.170460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:02.121 [2024-12-06 11:29:08.170776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.121 [2024-12-06 11:29:08.170792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.121 qpair failed and we were unable to recover it.
00:30:02.121 [2024-12-06 11:29:08.171041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.121 [2024-12-06 11:29:08.171052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.121 qpair failed and we were unable to recover it.
00:30:02.121 [2024-12-06 11:29:08.171347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.121 [2024-12-06 11:29:08.171358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.121 qpair failed and we were unable to recover it.
00:30:02.121 [2024-12-06 11:29:08.171637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.121 [2024-12-06 11:29:08.171646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.121 qpair failed and we were unable to recover it.
00:30:02.121 [2024-12-06 11:29:08.171934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.121 [2024-12-06 11:29:08.171943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.121 qpair failed and we were unable to recover it.
00:30:02.121 [2024-12-06 11:29:08.172165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.121 [2024-12-06 11:29:08.172173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.172458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.172467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.172647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.172656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.172934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.172944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.173277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.173286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.173599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.173609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.173901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.173910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.174261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.174271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.174570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.174579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.174922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.174931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.175241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.175251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.175567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.175577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.175764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.175774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.176216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.176226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.176480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.176489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.176812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.176822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.177028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.177037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.177362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.177372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.177679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.177689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.178006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.178016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.178305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.178316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.178639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.178648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.178979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.178989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.179299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.179309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.179461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.179470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.179655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.179665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.179920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.179931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.180298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.180308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.180605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.180614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.180917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.180926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.181263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.181272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.181457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.181468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.181793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.181802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.182010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.182020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.182405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.182414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.182711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.182721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.182954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.182963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.183268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.183278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.183452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.183462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.183609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.183618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.183895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.183905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.184242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.184251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.184440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.184450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.184651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.184661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.184973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.184983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.185315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.185324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.185603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.185613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.185929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.185938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.186043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.186050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.186376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.186384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.186586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.186595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.186834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.186841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.187150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.187159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.187521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.187529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.187826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.187835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.188146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.188154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.188443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.188452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.188736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.188745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.188937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.188945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.189254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.189263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.189554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.189564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.189858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.189869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.190188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.122 [2024-12-06 11:29:08.190197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.122 qpair failed and we were unable to recover it.
00:30:02.122 [2024-12-06 11:29:08.190504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.190513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.190800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.190808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.191083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.191091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.191384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.191392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.191728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.191737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.192045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.192055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.192341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.192349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.192652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.192661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.192924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.192933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.193246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.193255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.193562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.193571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.193890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.193900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.194012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.194020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.194301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.194309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.194621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.194630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.194817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.194825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.195134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.195142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.195434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.195442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.195719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.195727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.196026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.196035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.196218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.196226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.196528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.196537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.196875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.196884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.197210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.123 [2024-12-06 11:29:08.197218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.123 qpair failed and we were unable to recover it.
00:30:02.123 [2024-12-06 11:29:08.197454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.197462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.197770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.197778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.198081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.198090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.198387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.198396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.198721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.198729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 
00:30:02.123 [2024-12-06 11:29:08.199035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.199044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.199377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.199385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.199667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.199675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.199913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.199921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.200214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.200222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 
00:30:02.123 [2024-12-06 11:29:08.200516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.200525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.200875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.200884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.201213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.201221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.201376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.201387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.201612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.201621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 
00:30:02.123 [2024-12-06 11:29:08.201931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.201952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.202258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.202267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.202548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.202555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.202877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.202886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.203218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.203226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 
00:30:02.123 [2024-12-06 11:29:08.203516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.203524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.203850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.203860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.204050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.204058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.204338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.204346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.204647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.204657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 
00:30:02.123 [2024-12-06 11:29:08.205020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.205028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.205334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.205342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.205675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.205684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.205993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.206003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.206318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.206326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 
00:30:02.123 [2024-12-06 11:29:08.206691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.206699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.207039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.207048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.207179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.207189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.207551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.207560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.207884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.207894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 
00:30:02.123 [2024-12-06 11:29:08.208232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.208241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.208478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.208486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.208808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.208816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.209204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.123 [2024-12-06 11:29:08.209212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.123 qpair failed and we were unable to recover it. 00:30:02.123 [2024-12-06 11:29:08.209501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.209510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.209832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.209840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.210145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.210154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.210456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.210465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.210667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.210674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.210944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.210952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.211263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.211271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.211584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.211592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.211765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.211774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.211962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.211971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.212282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.212290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.212593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.212602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.212908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.212916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.213209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.213217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.213509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.213518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.213826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.213834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.214183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.214191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.214486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.214495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.214817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.214825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.215140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.215150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.215456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.215464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.215751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.215759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.216073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.216081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.216454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.216463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.216745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.216754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.216928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.216938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.217272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.217280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.217566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.217574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.217879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.217888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.218202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.218211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.218498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.218507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.218809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.218818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.218978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.218988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.219303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.219312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.219422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.219430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.219765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.219774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.124 [2024-12-06 11:29:08.220071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.220080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.220402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.220424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.220730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.220740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.221034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.221044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 00:30:02.124 [2024-12-06 11:29:08.221399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.124 [2024-12-06 11:29:08.221408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.124 qpair failed and we were unable to recover it. 
00:30:02.126 [2024-12-06 11:29:08.252952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.252961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.253236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.253245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.253549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.253557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.253867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.253876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.254188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.254197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 
00:30:02.126 [2024-12-06 11:29:08.254495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.254503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.254667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.254676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.254971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.254980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.255302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.255310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.255463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.255472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 
00:30:02.126 [2024-12-06 11:29:08.255776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.255786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.256099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.256108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.256416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.256424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.256608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.256616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.256920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.256928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 
00:30:02.126 [2024-12-06 11:29:08.257245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.257253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.257559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.257567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.257911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.257920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.258235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.258243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.258550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.258558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 
00:30:02.126 [2024-12-06 11:29:08.258901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.258909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.259231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.259240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.259546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.259554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.259835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.259844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.126 [2024-12-06 11:29:08.260114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.260122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 
00:30:02.126 [2024-12-06 11:29:08.260437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.126 [2024-12-06 11:29:08.260445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.126 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.260780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.260788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.261107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.261115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.261289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.261297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.261479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.261488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.261806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.261816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.262088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.262097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.262424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.262433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.262616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.262626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.262934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.262942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.263249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.263257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.263538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.263546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.263858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.263870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.264188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.264196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.264515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.264523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.264833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.264841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.265172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.265181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.265490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.265498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.265796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.265803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.266084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.266093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.266259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.266266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.266594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.266603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.266896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.266905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.267247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.267256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.267568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.267576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.267913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.267922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.268277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.268285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.268644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.268653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.268980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.268988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.269184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.269192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.269356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.269365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.269700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.269709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.270020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.270029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.270343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.270352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.270659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.270667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.270877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.270887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.271199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.271207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.271486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.271494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.271799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.271807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.272004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.272012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.272305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.272313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.272617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.272625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.272903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.272911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.273254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.273261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.273479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.273487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.273818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.273826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.274083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.274092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.274392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.274401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.274739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.274747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.275050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.275058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.127 [2024-12-06 11:29:08.275376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.275384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.275676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.275683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.275973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.275982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.276148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.276156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 00:30:02.127 [2024-12-06 11:29:08.276423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.127 [2024-12-06 11:29:08.276432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.127 qpair failed and we were unable to recover it. 
00:30:02.396 [2024-12-06 11:29:08.308222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.308230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.308551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.308560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.308757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.308766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.309043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.309052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.309359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.309367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 
00:30:02.396 [2024-12-06 11:29:08.309658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.309667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.309976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.309985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.310292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.310301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.310484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.310492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.310676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.310684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 
00:30:02.396 [2024-12-06 11:29:08.311007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.311015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.311332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.311341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.311647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.311655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.311974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.311983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.312317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.312325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 
00:30:02.396 [2024-12-06 11:29:08.312627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.312636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.312828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.312836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.313102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.313111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.313417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.313425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.313595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.313604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 
00:30:02.396 [2024-12-06 11:29:08.313923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.313934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.314237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.314245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.314555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.314564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.314755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.314763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.315103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.315112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 
00:30:02.396 [2024-12-06 11:29:08.315305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.315312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.315524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.315533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.315719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.315727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.316022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.316030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 00:30:02.396 [2024-12-06 11:29:08.316373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.316382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.396 qpair failed and we were unable to recover it. 
00:30:02.396 [2024-12-06 11:29:08.316690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.396 [2024-12-06 11:29:08.316700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.317012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.317021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.317334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.317343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.317656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.317665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.317965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.317974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.318267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.318276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.318670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.318679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.318990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.318998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.319315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.319323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.319508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.319516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.319821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.319829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.320030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.320038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.320337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.320345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.320650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.320658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.320838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.320846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.321132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.321141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.321450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.321459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.321755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.321763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.321945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.321953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.322232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.322240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.322575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.322585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.322895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.322905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.323236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.323244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.323523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.323531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.323841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.323849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.324150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.324159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.324454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.324464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.324774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.324783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.324945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.324954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.325280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.325289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.325596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.325607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.325913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.325922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.326232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.326240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.326431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.326439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.326748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.326756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.327040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.327048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.327365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.327374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.327684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.327692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.327999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.328008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.328171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.328180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.328477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.328485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.328774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.328782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.329183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.329191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.329499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.329508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 00:30:02.397 [2024-12-06 11:29:08.329694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.397 [2024-12-06 11:29:08.329703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.397 qpair failed and we were unable to recover it. 
00:30:02.397 [2024-12-06 11:29:08.329968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.397 [2024-12-06 11:29:08.329976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.397 qpair failed and we were unable to recover it.
00:30:02.397 [... the same three-line error sequence (connect() failed with errno = 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, roughly 115 more times, over replayed timestamps 11:29:08.330284 through 11:29:08.363059, console timestamps 00:30:02.397 through 00:30:02.399 ...]
00:30:02.399 [2024-12-06 11:29:08.363367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.363377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.363691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.363699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.364005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.364013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.364320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.364328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.364640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.364648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 
00:30:02.399 [2024-12-06 11:29:08.364955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.364963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.365283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.365292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.365583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.365591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.365761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.365769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.366079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.366088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 
00:30:02.399 [2024-12-06 11:29:08.366394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.366402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.366703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.366711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.367033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.367042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.367360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.367368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.367672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.367681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 
00:30:02.399 [2024-12-06 11:29:08.367975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.367983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.368291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.368299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.368611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.368621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.368952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.368961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.369297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.369306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 
00:30:02.399 [2024-12-06 11:29:08.369483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.369491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.369767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.369775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.370096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.370104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.370398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.370406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.370704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.370713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 
00:30:02.399 [2024-12-06 11:29:08.370975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.399 [2024-12-06 11:29:08.370983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.399 qpair failed and we were unable to recover it. 00:30:02.399 [2024-12-06 11:29:08.371284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.371292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.371462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.371471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.371655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.371663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.371926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.371935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.372112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.372120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.372313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.372322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.372593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.372602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.372918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.372927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.373125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.373133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.373336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.373344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.373671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.373680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.373844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.373852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.374169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.374177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.374466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.374474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.374744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.374752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.375083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.375091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.375290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.375297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.375454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.375462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.375662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.375671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.375978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.375986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.376296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.376304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.376630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.376639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.376957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.376966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.377276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.377284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.377586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.377594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.377884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.377892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.378208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.378217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.378412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.378420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.378733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.378742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.379028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.379037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.379355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.379363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.379695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.379706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.380029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.380037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.380364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.380373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.380570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.380578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.380841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.380850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.381145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.381154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.381442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.381450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.381749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.381758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.382038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.382046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.382365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.382373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.382712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.382721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.383030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.383039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.383361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.383369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.383681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.383690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.383986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.383995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.384303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.384311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.384473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.384481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.384779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.384789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [2024-12-06 11:29:08.385088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.385097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.385405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.385413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.385554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.385563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.385885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.385893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 00:30:02.400 [2024-12-06 11:29:08.386194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.400 [2024-12-06 11:29:08.386202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.400 qpair failed and we were unable to recover it. 
00:30:02.400 [... same three messages (connect() failed, errno = 111 / sock connection error of tqpair=0x7f0784000b90 / qpair failed and we were unable to recover it.) repeat for each retry from 11:29:08.386500 through 11:29:08.418577 ...]
00:30:02.402 [2024-12-06 11:29:08.418887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.402 [2024-12-06 11:29:08.418895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.402 qpair failed and we were unable to recover it.
00:30:02.402 [2024-12-06 11:29:08.419223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.419231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.419561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.419570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.419876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.419885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.420189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.420197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.420485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.420492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-12-06 11:29:08.420809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.420818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.420978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.420987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.421297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.421306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.421633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.421643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.421934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.421943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-12-06 11:29:08.422262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.422271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.422455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.422464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.422775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.422784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.422971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.422981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.423159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.423168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-12-06 11:29:08.423454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.423463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.423768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.423777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.424070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.424078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.424386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.424394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.424703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.424711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 
00:30:02.402 [2024-12-06 11:29:08.425021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.425029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.425211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.425219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.425433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.425441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.402 [2024-12-06 11:29:08.425794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.402 [2024-12-06 11:29:08.425802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.402 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.425978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.425987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.426275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.426284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.426478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.426486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.426801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.426810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.427004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.427014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.427196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.427204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.427515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.427524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.427826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.427834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.428108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.428117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.428458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.428467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.428781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.428790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.429100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.429110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.429390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.429400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.429679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.429688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.429995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.430004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.430322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.430330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.430499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.430508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.430801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.430809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.430992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.431000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.431333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.431342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.431647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.431656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.431948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.431957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.432274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.432282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.432591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.432599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.432904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.432912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.433246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.433255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.433559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.433568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.433898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.433907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.434232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.434241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.434570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.434578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.434875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.434884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.435186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.435195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.435501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.435510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.435695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.435704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.436014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.436022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.436359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.436368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.436677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.436685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.436996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.437006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.437367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.437375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.437549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.437558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.437889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.437898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.438194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.438202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.438511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.438519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.438827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.438839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.439143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.439152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.439308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.439315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.439618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.439627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.439935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.439944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.440145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.440153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.440314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.440322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 00:30:02.403 [2024-12-06 11:29:08.440558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.403 [2024-12-06 11:29:08.440567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.403 qpair failed and we were unable to recover it. 
00:30:02.403 [2024-12-06 11:29:08.440870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.403 [2024-12-06 11:29:08.440879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.403 qpair failed and we were unable to recover it.
[The same three-record pattern (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it") repeats continuously from 11:29:08.440870 through 11:29:08.476030. From 11:29:08.467513 onward the failing tqpair is 0x239f490 instead of 0x7f0784000b90; addr=10.0.0.2 and port=4420 are unchanged throughout.]
00:30:02.405 [2024-12-06 11:29:08.476361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.476373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.476685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.476698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.477023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.477035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.477375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.477388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.477722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.477734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 
00:30:02.405 [2024-12-06 11:29:08.477970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.477981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.478313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.478324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.478656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.478668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.479006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.479020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.479320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.479333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 
00:30:02.405 [2024-12-06 11:29:08.479635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.479647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.479977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.479991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.480318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.480330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.480512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.480523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.480832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.480844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 
00:30:02.405 [2024-12-06 11:29:08.481178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.481190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.481519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.481530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.481834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.405 [2024-12-06 11:29:08.481846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.405 qpair failed and we were unable to recover it. 00:30:02.405 [2024-12-06 11:29:08.482185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.482197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.482396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.482408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.482699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.482710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.482893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.482905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.483185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.483197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.483505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.483518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.483884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.483897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.484198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.484211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.484510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.484522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.484795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.484807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.485105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.485116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.485424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.485436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.485746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.485758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.486075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.486087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.486394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.486405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.486706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.486718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.487028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.487040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.487348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.487360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.487688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.487700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.488010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.488023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.488207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.488217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.488543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.488555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.488886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.488897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.489226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.489238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.489542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.489554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.489888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.489900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.490228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.490240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.490445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.490456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.490764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.490776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.490971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.490983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.491291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.491303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.491611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.491623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.491953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.491965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.492303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.492315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.492487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.492499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.492724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.492736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.493053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.493065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.493354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.493367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.493681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.493692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.494004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.494016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.494328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.494341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.494674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.494686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.494900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.494911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.495266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.495278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.495590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.495601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.495936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.495948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.496263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.496276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.496621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.496633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.496989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.497001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.497223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.497233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.497419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.497429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.497772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.497784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.497974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.497986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.498279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.498291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.498628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.498640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.498946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.498958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 00:30:02.406 [2024-12-06 11:29:08.499269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.499282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [2024-12-06 11:29:08.499619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.406 [2024-12-06 11:29:08.499631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.406 qpair failed and we were unable to recover it. 
00:30:02.406 [... identical connect() failed (errno = 111, ECONNREFUSED) / qpair recovery errors for tqpair=0x239f490, addr=10.0.0.2, port=4420 repeated through 11:29:08.534974 ...]
00:30:02.408 [2024-12-06 11:29:08.535317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.535328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.535516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.535526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.535878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.535890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.536044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.536055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.536228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.536242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 
00:30:02.408 [2024-12-06 11:29:08.536549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.536560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.536883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.536894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.537160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.537171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.537479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.537491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.537832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.537843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 
00:30:02.408 [2024-12-06 11:29:08.538229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.538241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.538474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.538487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.538817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.538827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.539129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.539140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.539339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.539349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 
00:30:02.408 [2024-12-06 11:29:08.539680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.539691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.540003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.540015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.540319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.540331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.540645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.408 [2024-12-06 11:29:08.540656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.408 qpair failed and we were unable to recover it. 00:30:02.408 [2024-12-06 11:29:08.540860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.540874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.541190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.541202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.541534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.541545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.541861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.541882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.542183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.542194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.542505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.542516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.542848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.542860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.543212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.543224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.543537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.543549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.543858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.543874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.544094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.544105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.544435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.544446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.544741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.544752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.545078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.545090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.545423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.545434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.545738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.545748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.546035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.546046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.546364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.546375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.546744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.546755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.547096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.547110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.547426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.547437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.547492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.547502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.547790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.547802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.548113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.548125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.548437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.548449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.548778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.548789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.549096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.549108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.549393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.549404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.549706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.549718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.550026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.550038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.550241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.550252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.550529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.550540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.550864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.550875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.551204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.551216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.551515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.551527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.551865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.551877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.409 [2024-12-06 11:29:08.552175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.552187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.552375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.552386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.552719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.552732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.553058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.553070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 00:30:02.409 [2024-12-06 11:29:08.553372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.409 [2024-12-06 11:29:08.553383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.409 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.553720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.553732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.554034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.554046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.554349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.554360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.554661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.554673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.555015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.555027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.555395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.555406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.555745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.555756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.556110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.556122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.556454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.556466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.556783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.556796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.557111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.557124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.557454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.557466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.557793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.557805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.558122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.558134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.558469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.558481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.558642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.558654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.558977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.558988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.559284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.559294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.559628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.559640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.559969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.559980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.560286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.560298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.560486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.560498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.560787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.560799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.561108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.561121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.561451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.561462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.561782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.561795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.562106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.562117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.562423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.562433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.562723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.562733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.563043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.563054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.563374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.563385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.563682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.563693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.564001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.564013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.564207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.564219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.564512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.564524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.564856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.564870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.565174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.565186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.565521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.565533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.565877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.565890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.566163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.566174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 
00:30:02.682 [2024-12-06 11:29:08.566488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.566498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.682 [2024-12-06 11:29:08.566835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.682 [2024-12-06 11:29:08.566846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.682 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.567192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.567205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.567393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.567404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.567720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.567731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.568028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.568039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.568319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.568333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.568649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.568660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.569046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.569057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.569227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.569238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.569548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.569559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.569890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.569903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.570200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.570212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.570555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.570567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.570873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.570884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.571044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.571055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.571374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.571385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.571559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.571570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.571860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.571875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.572213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.572225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.572556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.572567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.572875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.572886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.573108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.573119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.573432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.573442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.573748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.573760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.574102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.574114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.574396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.574407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.574700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.574713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.574901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.574913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.575191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.575202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.575534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.575546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.575847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.575858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.576208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.576219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.576408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.576421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.576715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.576725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.577033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.577045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.577318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.577328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.577592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.577603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.577886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.577898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.578216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.578228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.578559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.578570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.578897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.578908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.579212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.579224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.579533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.579544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.579848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.579860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.580165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.580176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.580454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.580464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.580812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.580824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.581122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.581134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.581463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.581474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.581808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.581820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.582098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.582109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.582419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.582431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.582737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.582749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.583078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.583090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.583432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.583445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.583760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.583772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.584080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.584092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.584276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.584289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.584617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.584630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.584933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.584947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.585236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.585247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.585582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.585594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.585896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.585907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 00:30:02.683 [2024-12-06 11:29:08.586231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.586242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.683 qpair failed and we were unable to recover it. 
00:30:02.683 [2024-12-06 11:29:08.586546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.683 [2024-12-06 11:29:08.586556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.586892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.586903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.587220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.587231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.587564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.587575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.587867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.587878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.588192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.588203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.588592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.588603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.588961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.588974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.589144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.589155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.589522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.589534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.589844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.589855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.590190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.590202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.590515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.590526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.590812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.590823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.591123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.591134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.591433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.591444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.591751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.591761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.592033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.592044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.592333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.592344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.592652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.592664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.592967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.592978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.593286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.593297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.593632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.593643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.593838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.593849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.594145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.594157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.594483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.594495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.594802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.594814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.595161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.595172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.595478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.595490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.595828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.595840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.596193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.596205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.596575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.596586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.596895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.596906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.597073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.597086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.597407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.597418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.597731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.597742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.598072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.598083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.598419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.598431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.598740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.598751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.599092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.599105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.599420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.599431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.599710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.599721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.600032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.600043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.600265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.600276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.600602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.600614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.600947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.600958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.601271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.601281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.601591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.601602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.601943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.601954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.602255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.602266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.602593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.602604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.602895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.602906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.603227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.603238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.603575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.603586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.603888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.603899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.604113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.604124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.604406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.604416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.604726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.604737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.605034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.605045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.605355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.605366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 
00:30:02.684 [2024-12-06 11:29:08.605675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.605686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.606023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.606034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.606216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.606226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.606544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.606557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.684 qpair failed and we were unable to recover it. 00:30:02.684 [2024-12-06 11:29:08.606843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.684 [2024-12-06 11:29:08.606854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.607048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.607061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.607250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.607261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.607446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.607456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.607744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.607756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.608081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.608093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.608396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.608407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.608715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.608727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.609029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.609040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.609324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.609335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.609637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.609648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.609958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.609973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.610280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.610291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.610628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.610639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.610804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.610817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.611144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.611155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.611455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.611466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.611770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.611781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.611962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.611973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.612277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.612288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.612594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.612605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.612802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.612813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.613135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.613147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.613475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.613487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.613789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.613799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.614135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.614147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.614460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.614473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.614784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.614795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.615121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.615133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.615472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.615484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.615790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.615802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.616004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.616016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.616334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.616347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.616692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.616704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.617015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.617026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 00:30:02.685 [2024-12-06 11:29:08.617337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.617349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
00:30:02.685 [2024-12-06 11:29:08.617523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.685 [2024-12-06 11:29:08.617534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.685 qpair failed and we were unable to recover it. 
[identical connect()/qpair failure pairs (errno = 111, tqpair=0x239f490, addr=10.0.0.2, port=4420) repeat continuously from 11:29:08.617839 through 11:29:08.652518; repeats elided]
00:30:02.687 [2024-12-06 11:29:08.652715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.652726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-12-06 11:29:08.653059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.653071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.653399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.653411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.653694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.653705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.653894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.653905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.654174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.654186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-12-06 11:29:08.654503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.654514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.654794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.654804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.655109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.655121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.655423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.655435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.655767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.655778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-12-06 11:29:08.656094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.656105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.656308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.656319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.656513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.656523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.656878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.656890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.657225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.657236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-12-06 11:29:08.657540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.657551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.657892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.657904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.658206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.658217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.658519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.658531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.658712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.658724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-12-06 11:29:08.659068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.659080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.659385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.659396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.659690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.659700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.660017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.660028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.660351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.660362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-12-06 11:29:08.660682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.660693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.661022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.661033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.661318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.661328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.661549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.661560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.661959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.661971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 
00:30:02.687 [2024-12-06 11:29:08.662298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.662309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.662502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.662514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.687 qpair failed and we were unable to recover it. 00:30:02.687 [2024-12-06 11:29:08.662822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.687 [2024-12-06 11:29:08.662832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.663142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.663154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.663492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.663503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.663803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.663814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.664130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.664141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.664368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.664379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.664707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.664718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.665030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.665041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.665351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.665363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.665654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.665665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.665998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.666009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.666311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.666323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.666665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.666677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.666923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.666934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.667263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.667274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.667578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.667589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.667888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.667901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.668212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.668223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.668279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.668288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.668575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.668586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.668895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.668908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.669241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.669253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.669552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.669563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.669897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.669909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.670104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.670115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.670436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.670446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.670774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.670785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.671084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.671094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.671291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.671302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.671515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.671526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.671787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.671799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.672086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.672098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.672471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.672482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.672813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.672826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.673102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.673113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.673421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.673432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.673768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.673780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.674093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.674105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.674416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.674428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.674726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.674737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.674962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.674974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.675313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.675325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.675631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.675643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.688 [2024-12-06 11:29:08.675975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.675986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.676359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.676370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.676677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.676690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.676820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.676832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 00:30:02.688 [2024-12-06 11:29:08.677069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.688 [2024-12-06 11:29:08.677080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.688 qpair failed and we were unable to recover it. 
00:30:02.690 [2024-12-06 11:29:08.711355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.711366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.711665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.711678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.711985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.711997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.712332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.712345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.712653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.712665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 
00:30:02.690 [2024-12-06 11:29:08.713003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.713015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.713343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.713355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.713548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.713558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.713883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.713895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.714219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.714230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 
00:30:02.690 [2024-12-06 11:29:08.714532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.714543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.715388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.715414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.715727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.715739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.716046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.716058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.716367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.716380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 
00:30:02.690 [2024-12-06 11:29:08.716690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.716701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.717006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.717018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.717300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.717312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.717650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.717662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.717994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.718006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 
00:30:02.690 [2024-12-06 11:29:08.718326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.718338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.718677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.718688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.718858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.718873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.719192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.719204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.719537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.719550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 
00:30:02.690 [2024-12-06 11:29:08.719793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.719805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.720174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.720187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.720517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.720529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.720890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.720903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.721201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.721212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 
00:30:02.690 [2024-12-06 11:29:08.721533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.721545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.721726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.721740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.722072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.690 [2024-12-06 11:29:08.722083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.690 qpair failed and we were unable to recover it. 00:30:02.690 [2024-12-06 11:29:08.722412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.722424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.722641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.722652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.722952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.722964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.723272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.723284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.723469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.723481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.723796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.723808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.724156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.724168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.724490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.724501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.724786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.724797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.725106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.725117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.725450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.725463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.725639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.725651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.725949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.725960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.726261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.726273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.726572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.726583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.726898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.726910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.727253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.727266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.727605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.727616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.727923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.727935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.728244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.728256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.728593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.728604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.728933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.728945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.729260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.729272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.729581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.729593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.729881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.729893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.730284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.730296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.730597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.730619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.730978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.730991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.731165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.731176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.731395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.731405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.731699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.731711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.732018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.732029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.732316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.732331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.732638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.732650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.732962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.732975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.733174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.733187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.733486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.733499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.733802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.733814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.734128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.734142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.734472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.734485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.734817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.734830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 00:30:02.691 [2024-12-06 11:29:08.735162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.735176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
00:30:02.691 [2024-12-06 11:29:08.735511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.691 [2024-12-06 11:29:08.735523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.691 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats ~113 more times between 11:29:08.735 and 11:29:08.771 ...]
00:30:02.693 [2024-12-06 11:29:08.771225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.771237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.693 [2024-12-06 11:29:08.771565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.771576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.771880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.771892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.772227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.772238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.772545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.772557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.772895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.772907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.693 [2024-12-06 11:29:08.773233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.773246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.773542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.773554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.773872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.773884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.774171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.774183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.774502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.774514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.693 [2024-12-06 11:29:08.774821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.774832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.775134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.775146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.775479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.775490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.775799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.775810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.776110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.776122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.693 [2024-12-06 11:29:08.776453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.776465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.776801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.776812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.777060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.777070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.777375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.777386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.777688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.777700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.693 [2024-12-06 11:29:08.778029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.778040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.778340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.778350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.778653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.778665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.778964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.778975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.779320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.779333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.693 [2024-12-06 11:29:08.779635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.779646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.779980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.779991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.780301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.780313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.780516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.780527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.780698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.780708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.693 [2024-12-06 11:29:08.780909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.780919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.781235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.781246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.781582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.781593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.781906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.781917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 00:30:02.693 [2024-12-06 11:29:08.782227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.693 [2024-12-06 11:29:08.782239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.693 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.782419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.782432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.782718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.782729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.783029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.783041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.783358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.783370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.783671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.783683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.783873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.783884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.784185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.784196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.784493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.784504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.784840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.784850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.785068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.785079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.785392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.785404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.785703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.785715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.786024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.786035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.786375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.786387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.786721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.786732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.787040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.787052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.787433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.787446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.787753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.787763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.788087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.788099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.788398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.788409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.788702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.788713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.788994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.789004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.789331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.789342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.789642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.789652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.789834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.789847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.790199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.790210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.790519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.790529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.790714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.790725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.791002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.791013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.791353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.791364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.791654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.791665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.791977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.791988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.792163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.792173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.792533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.792544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.792717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.792729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.792933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.792944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.793226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.793238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.793452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.793463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.793771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.793781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 00:30:02.694 [2024-12-06 11:29:08.794087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.694 [2024-12-06 11:29:08.794098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.694 qpair failed and we were unable to recover it. 
00:30:02.694 [2024-12-06 11:29:08.794412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.794423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.794704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.794714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.795013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.795026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.795211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.795224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.795546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.795558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.795857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.795871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.796157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.796168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.796508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.796519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.796819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.796838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.797137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.797148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.797450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.797460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.797768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.797780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.798094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.798106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.798402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.798413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.798693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.798704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.799012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.799023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.799366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.799378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.799706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.799719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.800027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.800038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.800372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.800383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.800685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.800696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.800918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.800929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.801236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.801247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.801545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.801556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.694 qpair failed and we were unable to recover it.
00:30:02.694 [2024-12-06 11:29:08.801857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.694 [2024-12-06 11:29:08.801871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.802171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.802182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.802385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.802396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.802720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.802731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.802908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.802919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.803315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.803326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.803639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.803650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.804005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.804016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.804322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.804334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.804653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.804663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.804843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.804853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.805135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.805147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.805466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.805476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.805669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.805679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.805952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.805963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.806273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.806293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.806617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.806628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.806968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.806981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.807165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.807176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.807497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.807508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.807811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.807824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.808157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.808169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.808556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.808566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.808868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.808880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.809251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.809262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.809593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.809605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.809915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.809926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.810205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.810216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.810467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.810478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.810788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.810799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.811132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.811144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.811461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.811471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.811809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.811820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.812121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.812133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.812336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.812347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.812539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.812551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.812878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.812889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.813201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.813211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.813510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.813521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.813712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.813722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.814013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.814025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.814335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.814347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.814657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.814668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.814977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.814988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.815344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.815355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.815688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.815700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.816005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.816017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.816325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.816338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.816646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.816657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.816936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.816947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.817254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.817265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.817575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.817587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.817898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.817912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.818155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.818166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.818475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.818486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.818803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.818815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.819114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.819125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.819305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.819316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.819610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.819621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.695 [2024-12-06 11:29:08.819934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.695 [2024-12-06 11:29:08.819945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.695 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.820274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.820285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.820622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.820634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.820936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.820948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.821264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.821276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.821588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.821599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.821903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.821914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.822217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.822228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.822554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.822566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.822869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.822881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.823212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.823224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.823408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.823420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.823691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.823701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.824004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.824016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.824353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.824365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.824697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.824709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.825079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.825091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.825390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.825401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.825735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.825746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.826078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.826089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.826412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.826423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.826613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.826624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.826787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.826798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.827130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.827142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.827443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.827454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.827772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.827783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.828085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.828098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.828428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.828440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.828742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.828753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.829092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.829104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.829440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.829452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.829782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.696 [2024-12-06 11:29:08.829797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.696 qpair failed and we were unable to recover it.
00:30:02.696 [2024-12-06 11:29:08.830115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.830127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.830458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.830470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.830808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.830819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.831137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.831149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.831449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.831461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 
00:30:02.696 [2024-12-06 11:29:08.831792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.831804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.832104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.832115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.832420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.832431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.832768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.832780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.832961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.832973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 
00:30:02.696 [2024-12-06 11:29:08.833308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.833319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.833523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.833535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.833865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.833878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.834184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.834195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.834473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.834484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 
00:30:02.696 [2024-12-06 11:29:08.834819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.834830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.835134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.835146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.696 [2024-12-06 11:29:08.835320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.696 [2024-12-06 11:29:08.835331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.696 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.835649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.835662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.835965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.835976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 
00:30:02.965 [2024-12-06 11:29:08.836245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.836256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.836561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.836572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.836853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.836866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.837194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.837205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.837515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.837529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 
00:30:02.965 [2024-12-06 11:29:08.837857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.837871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.838174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.838185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.838315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.838327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.838533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.838545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.838881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.838893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 
00:30:02.965 [2024-12-06 11:29:08.839221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.839233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.839534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.839546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.839850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.839861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.840181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.840192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.840523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.840535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 
00:30:02.965 [2024-12-06 11:29:08.840865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.840877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.841207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.841219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.841550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.841562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.841898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.841910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.842231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.842242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 
00:30:02.965 [2024-12-06 11:29:08.842415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.842425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.842715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.842727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.843057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.843069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.843370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.843381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.843665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.843676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 
00:30:02.965 [2024-12-06 11:29:08.843977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.843988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.844327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.844339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.844529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.844540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.844712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.965 [2024-12-06 11:29:08.844724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.965 qpair failed and we were unable to recover it. 00:30:02.965 [2024-12-06 11:29:08.845011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.845023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-12-06 11:29:08.845315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.845327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.845510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.845523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.845788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.845800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.846104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.846116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.846301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.846313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-12-06 11:29:08.846645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.846657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.846990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.847002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.847309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.847321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.847650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.847661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.848033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.848045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-12-06 11:29:08.848344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.848355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.848670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.848681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.849020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.849031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.849360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.849371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.849571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.849582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-12-06 11:29:08.849882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.849894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.850193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.850205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.850537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.850550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.850875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.850888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.851169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.851181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-12-06 11:29:08.851501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.851512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.851689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.851699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.852038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.852050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.852379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.852390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 00:30:02.966 [2024-12-06 11:29:08.852723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.966 [2024-12-06 11:29:08.852734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.966 qpair failed and we were unable to recover it. 
00:30:02.966 [2024-12-06 11:29:08.853053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.966 [2024-12-06 11:29:08.853065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.966 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats ~115 times between 11:29:08.853 and 11:29:08.889 against addr=10.0.0.2, port=4420; nearly all attempts report tqpair=0x239f490, with four attempts around 11:29:08.872-08.874 reporting tqpair=0x7f0780000b90 instead ...]
00:30:02.969 [2024-12-06 11:29:08.889582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.969 [2024-12-06 11:29:08.889594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.969 qpair failed and we were unable to recover it.
00:30:02.969 [2024-12-06 11:29:08.889934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.969 [2024-12-06 11:29:08.889946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.969 qpair failed and we were unable to recover it. 00:30:02.969 [2024-12-06 11:29:08.890175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.969 [2024-12-06 11:29:08.890187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.969 qpair failed and we were unable to recover it. 00:30:02.969 [2024-12-06 11:29:08.890470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.969 [2024-12-06 11:29:08.890482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.969 qpair failed and we were unable to recover it. 00:30:02.969 [2024-12-06 11:29:08.890850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.969 [2024-12-06 11:29:08.890867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.969 qpair failed and we were unable to recover it. 00:30:02.969 [2024-12-06 11:29:08.891194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.969 [2024-12-06 11:29:08.891205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.969 qpair failed and we were unable to recover it. 
00:30:02.969 [2024-12-06 11:29:08.891508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.969 [2024-12-06 11:29:08.891520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.969 qpair failed and we were unable to recover it. 00:30:02.969 [2024-12-06 11:29:08.891853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.969 [2024-12-06 11:29:08.891867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.969 qpair failed and we were unable to recover it. 00:30:02.969 [2024-12-06 11:29:08.892163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.892176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.892509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.892520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.892845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.892856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.893209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.893223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.893593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.893604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.893918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.893930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.894127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.894139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.894448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.894459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.894519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.894531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.894823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.894835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.895144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.895155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.895457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.895469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.895678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.895689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.896008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.896019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.896330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.896342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.896669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.896681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.897050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.897061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.897385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.897398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.897725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.897736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.898078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.898089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.898423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.898434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.898763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.898775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.899095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.899106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.899299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.899311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.899595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.899606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.899924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.899935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.900263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.900274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.900584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.900595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.900931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.900943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.901290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.901303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.901630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.901641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.901993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.902005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.902338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.902350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.902600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.902611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.902925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.902937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.903246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.903259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.903597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.903610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 00:30:02.970 [2024-12-06 11:29:08.903920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.970 [2024-12-06 11:29:08.903932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.970 qpair failed and we were unable to recover it. 
00:30:02.970 [2024-12-06 11:29:08.904257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.904267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.904573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.904584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.904873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.904885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.905071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.905082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.905358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.905370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 
00:30:02.971 [2024-12-06 11:29:08.905672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.905683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.905869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.905882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.906244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.906257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.906565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.906576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.906897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.906910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 
00:30:02.971 [2024-12-06 11:29:08.907214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.907225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.907530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.907541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.907846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.907857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.908163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.908175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.908511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.908522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 
00:30:02.971 [2024-12-06 11:29:08.908827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.908838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.909147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.909158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.909465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.909477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.909591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.909603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.909928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.909940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 
00:30:02.971 [2024-12-06 11:29:08.910147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.910157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.910353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.910365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.910592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.910603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.910911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.910923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.911235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.911246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 
00:30:02.971 [2024-12-06 11:29:08.911425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.911436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.911732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.911744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.912031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.912042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.912352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.912364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.912672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.912684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 
00:30:02.971 [2024-12-06 11:29:08.913031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.913043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.913351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.913364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.913678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.913689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.913902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.913915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.914218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.914229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 
00:30:02.971 [2024-12-06 11:29:08.914537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.914549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.914710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.914722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.915011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.915024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.971 qpair failed and we were unable to recover it. 00:30:02.971 [2024-12-06 11:29:08.915324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.971 [2024-12-06 11:29:08.915336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.972 qpair failed and we were unable to recover it. 00:30:02.972 [2024-12-06 11:29:08.915651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.972 [2024-12-06 11:29:08.915663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.972 qpair failed and we were unable to recover it. 
00:30:02.972 [2024-12-06 11:29:08.915964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.915976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.916285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.916297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.916585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.916596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.916908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.916921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.917244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.917255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.917556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.917568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.917903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.917915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.918228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.918239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.918581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.918592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.918763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.918774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.919075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.919087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.919388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.919399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.919599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.919610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.919929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.919941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.920250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.920262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.920568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.920580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.920886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.920897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.921217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.921228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.921532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.921544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.921853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.921869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.922181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.922196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.922497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.922508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.922845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.922857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.923197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.923208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.923519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.923531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.923726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.923737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.924023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.924036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.924342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.924353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.924689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.924701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.925186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.925198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.925508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.925519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.925840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.925852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.926035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.926047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.926368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.926380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.926720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.926732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.926961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.926973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.972 [2024-12-06 11:29:08.927322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.972 [2024-12-06 11:29:08.927334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.972 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.927680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.927692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.928031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.928044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.928367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.928379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.928708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.928721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.929030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.929041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.929384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.929395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.929701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.929711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.930015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.930030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.930363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.930375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.930708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.930721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.931027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.931039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.931372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.931384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.931588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.931599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.931887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.931898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.932232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.932242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.932558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.932569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.932885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.932896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.933210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.933221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.933531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.933543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.933855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.933870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.934202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.934214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.934542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.934554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.934908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.934920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.935101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.935113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.935278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.935289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.935580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.935592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.935929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.935940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.936270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.936281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.936582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.936594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.936929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.936941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.937276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.937288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.937601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.937614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.937927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.937939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.938256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.938267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.938597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.938608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.938904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.938915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.939227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.939239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.939511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.973 [2024-12-06 11:29:08.939523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.973 qpair failed and we were unable to recover it.
00:30:02.973 [2024-12-06 11:29:08.939853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.939869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.940203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.940216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.940436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.940447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.940620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.940631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.940946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.940958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.941278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.941289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.941603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.941614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.941818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.941829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.942165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.942177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.942524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.942536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.942858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.942874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.943190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.943201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.943479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.943490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.943808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.943822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.944150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.944161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.944467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.944480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.944808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.944819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.945139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.945151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.945481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.945493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.945802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.974 [2024-12-06 11:29:08.945814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.974 qpair failed and we were unable to recover it.
00:30:02.974 [2024-12-06 11:29:08.946146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.946159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.946341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.946353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.946680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.946692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.946997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.947009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.947183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.947195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-12-06 11:29:08.947504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.947515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.947839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.947851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.948159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.948171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.948421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.948433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.948733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.948745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-12-06 11:29:08.949043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.949055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.949375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.949387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.949681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.949692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.949876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-12-06 11:29:08.949888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-12-06 11:29:08.950174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.950187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.950489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.950501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.950853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.950869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.951163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.951175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.951476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.951488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.951791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.951802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.952019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.952032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.952361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.952372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.952682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.952694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.952904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.952916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.953195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.953206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.953421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.953433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.953761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.953772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.954088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.954100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.954303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.954314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.954568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.954580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.954798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.954813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.955189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.955201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.955501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.955513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.955830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.955841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.956021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.956032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.956323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.956334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.956617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.956629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.956790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.956802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.957145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.957158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.957469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.957481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.957662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.957675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.957975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.957986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.958281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.958293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.958575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.958586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.958855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.958869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.959173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.959185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.959489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.959500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.959805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.959816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.960151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.960163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.960474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.960485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-12-06 11:29:08.960849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.960866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.961175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.961187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-12-06 11:29:08.961517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-12-06 11:29:08.961529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.961734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.961745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.962058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.962070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-12-06 11:29:08.962391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.962402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.962645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.962655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.962948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.962960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.963311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.963323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.963638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.963650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-12-06 11:29:08.963830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.963841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.964161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.964172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.964359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.964371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.964678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.964688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.964986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.964998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-12-06 11:29:08.965225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.965237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.965524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.965536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.965899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.965911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.966196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.966207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.966516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.966529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-12-06 11:29:08.966840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.966851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.967166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.967178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.967481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.967492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.967881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.967893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.968203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.968214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-12-06 11:29:08.968623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.968634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.968936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.968947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.969265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.969276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.969579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.969592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.969926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.969938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-12-06 11:29:08.970266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.970277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.970559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.970570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.970879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.970891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.971041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.971053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-12-06 11:29:08.971213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-12-06 11:29:08.971224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-12-06 11:29:08.971506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.976 [2024-12-06 11:29:08.971518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.976 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats with advancing timestamps from 2024-12-06 11:29:08.971 through 11:29:09.006 ...]
00:30:02.979 [2024-12-06 11:29:09.006036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.979 [2024-12-06 11:29:09.006047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:02.979 qpair failed and we were unable to recover it.
00:30:02.979 [2024-12-06 11:29:09.006382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-12-06 11:29:09.006394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-12-06 11:29:09.006727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-12-06 11:29:09.006738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-12-06 11:29:09.007089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.007101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.007428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.007440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.007764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.007775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.007966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.007977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.008342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.008354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.008661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.008673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.008907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.008918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.009249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.009262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.009590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.009603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.009889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.009901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.010238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.010249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.010511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.010523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.010872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.010884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.011181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.011203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.011535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.011546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.011879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.011891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.012219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.012230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.012391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.012403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.012709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.012720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.013046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.013058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.013370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.013382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.013607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.013619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.013944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.013956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.014293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.014303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.014597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.014608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.014783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.014793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.015111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.015122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.015289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.015300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.015461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.015473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.015645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.015656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.015897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.015909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.016240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.016252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.016563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.016574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.016885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.016897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.017290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.017301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.017599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.017611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.017935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.017947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-12-06 11:29:09.018268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.018279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-12-06 11:29:09.018579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-12-06 11:29:09.018591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.018775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.018786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.019049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.019060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.019394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.019405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.019714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.019726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-12-06 11:29:09.019911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.019924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.020123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.020134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.020480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.020492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.020803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.020814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.021005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.021016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-12-06 11:29:09.021305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.021316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.021648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.021659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.022001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.022014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.022200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.022212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.022546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.022557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-12-06 11:29:09.022746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.022757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.023085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.023096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.023429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.023440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.023768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.023779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.024080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.024091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-12-06 11:29:09.024420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.024432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.024714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.024725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.025044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.025055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.025360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.025371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.025689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.025702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-12-06 11:29:09.026036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.026050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.026356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.026367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.026676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.026688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.026881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.026893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.027260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.027271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-12-06 11:29:09.027405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.027415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.027688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.027700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.028042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.028053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.028360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.028372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-12-06 11:29:09.028670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.028682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-12-06 11:29:09.028994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-12-06 11:29:09.029005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it.
[... the same pair of errors — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats with only the timestamps differing, through 11:29:09.064 ...]
00:30:02.985 [2024-12-06 11:29:09.064818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.064830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.065167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.065179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.065486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.065498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.065869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.065880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.066151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.066162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-12-06 11:29:09.066468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.066479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.066789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.066801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.067110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.067122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.067433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.067445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.067738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.067749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-12-06 11:29:09.068061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.068073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.068350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.068362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.068690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.068702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.069034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.069045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.069348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.069361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-12-06 11:29:09.069710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.069721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.070033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.070044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.070376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.070387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.070700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.070712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.070907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.070920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-12-06 11:29:09.071204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.071215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.071515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.071526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.071841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.071853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.072188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.072199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.072537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.072549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-12-06 11:29:09.072879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.072891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.073108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.073118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.073415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.073428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.073760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.073772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.073949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.073960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-12-06 11:29:09.074285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.074296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.074570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.074581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.074985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.074997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.075337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.075348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.075647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.075658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-12-06 11:29:09.075846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.075857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.076156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.076167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-12-06 11:29:09.076494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-12-06 11:29:09.076508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.076837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.076849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.077185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.077197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.077496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.077508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.077692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.077706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.077989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.078000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.078225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.078235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.078615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.078626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.078957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.078969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.079297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.079309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.079598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.079610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.079816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.079827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.080135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.080146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.080475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.080486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.080778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.080791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.081100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.081111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.081414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.081426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.081719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.081730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.082003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.082014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.082344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.082355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.082660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.082672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.082989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.083002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.083193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.083204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.083503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.083515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.083844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.083856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.084199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.084210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.084545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.084557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.084762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.084778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.085059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.085070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.085470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.085483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.085826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.085838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.086026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.086038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.086311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.086323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.086635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.086647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.086937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.086949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.087253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.087264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.087449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.087461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.986 [2024-12-06 11:29:09.087734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.087745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 
00:30:02.986 [2024-12-06 11:29:09.087930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.986 [2024-12-06 11:29:09.087943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.986 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.088177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.088188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.088384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.088394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.088672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.088683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.089025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.089037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 
00:30:02.987 [2024-12-06 11:29:09.089339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.089350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.089524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.089535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.089865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.089877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.090211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.090222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.090522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.090533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 
00:30:02.987 [2024-12-06 11:29:09.090932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.090944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.091138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.091148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.091446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.091457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.091768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.091779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.091998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.092010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 
00:30:02.987 [2024-12-06 11:29:09.092330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.092342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.092672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.092683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.093015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.093028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.093360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.093371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.093674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.093686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 
00:30:02.987 [2024-12-06 11:29:09.094024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.094035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.094362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.094374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.094737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.094749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.095075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.095095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.095456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.095467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 
00:30:02.987 [2024-12-06 11:29:09.095719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.095730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.096032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.096043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.096339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.096351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.096652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.096663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.096856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.096870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 
00:30:02.987 [2024-12-06 11:29:09.097053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.097064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.097340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.097351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.097646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.097658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.097821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.097833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.098155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.098167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 
00:30:02.987 [2024-12-06 11:29:09.098492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.098503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.098835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.098846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.099179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.987 [2024-12-06 11:29:09.099190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.987 qpair failed and we were unable to recover it. 00:30:02.987 [2024-12-06 11:29:09.099508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.099519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.099675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.099686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 
00:30:02.988 [2024-12-06 11:29:09.099901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.099913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.100128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.100140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.100466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.100478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.100774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.100785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.101085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.101096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 
00:30:02.988 [2024-12-06 11:29:09.101410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.101421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.101738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.101750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.101927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.101939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.102355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.102366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.102667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.102679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 
00:30:02.988 [2024-12-06 11:29:09.102982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.102993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.103306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.103318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.103367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.103379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.103671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.103681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.103992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.104004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 
00:30:02.988 [2024-12-06 11:29:09.104337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.104348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.104648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.104658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.104997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.105011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.105338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.105349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.105651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.105662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 
00:30:02.988 [2024-12-06 11:29:09.105973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.105985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.106323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.106335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.106649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.106662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.106846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.106858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.107169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.107181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 
00:30:02.988 [2024-12-06 11:29:09.107511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.107523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.107736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.107747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.107960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.107972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.108305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.108316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.108656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.108668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 
00:30:02.988 [2024-12-06 11:29:09.108999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.109010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.109342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.109354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.109680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.109692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.110026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.988 [2024-12-06 11:29:09.110038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.988 qpair failed and we were unable to recover it. 00:30:02.988 [2024-12-06 11:29:09.110350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.110361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 
00:30:02.989 [2024-12-06 11:29:09.110654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.110665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.110976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.110987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.111286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.111297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.111609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.111620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.111980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.111991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 
00:30:02.989 [2024-12-06 11:29:09.112328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.112340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.112672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.112683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.112993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.113004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.113185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.113196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 
00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 
Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Read completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 Write completed with error (sct=0, sc=8) 00:30:02.989 starting I/O failed 00:30:02.989 [2024-12-06 11:29:09.113958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.989 [2024-12-06 11:29:09.114379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.114410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 00:30:02.989 [2024-12-06 11:29:09.114738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.989 [2024-12-06 11:29:09.114750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:02.989 qpair failed and we were unable to recover it. 
00:30:02.989 [2024-12-06 11:29:09.115181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.989 [2024-12-06 11:29:09.115210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:02.989 qpair failed and we were unable to recover it.
00:30:02.989 [... the same connect() failure (errno = 111) and qpair recovery error against 10.0.0.2, port=4420 on tqpair=0x7f0784000b90 repeats continuously from 11:29:09.115 through 11:29:09.149 ...]
00:30:03.267 [2024-12-06 11:29:09.150036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.150044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.150314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.150322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.150534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.150543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.150852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.150864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.151252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.151262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-12-06 11:29:09.151571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.151580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.151875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.151884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.152202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.152210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.152529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.152537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.152730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.152738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-12-06 11:29:09.153059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.153068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.153380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.153390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.153696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.153704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.154022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.154030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.154209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.154217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-12-06 11:29:09.154537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.154545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.154874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.154882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.155174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.155182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.155512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.155521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-12-06 11:29:09.155825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-12-06 11:29:09.155834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.156119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.156127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.156433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.156441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.156731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.156740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.156937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.156947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.157271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.157279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.157474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.157481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.157804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.157812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.158114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.158122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.158472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.158482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.158793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.158803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.159100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.159109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.159422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.159431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.159783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.159792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.160102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.160112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.160284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.160293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.160555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.160564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.160873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.160883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.161154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.161162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.161473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.161482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.161791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.161799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.162115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.162125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.162293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.162301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.162594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.162602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.162913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.162921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.163226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.163234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.163542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.163550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.163843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.163852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.164130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.164139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.164374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.164382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.164700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.164708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.165084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.165093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.165405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.165414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.165731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.165739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.166034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.166042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.166373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.166382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-12-06 11:29:09.166696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.166704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.167016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.167025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-12-06 11:29:09.167349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-12-06 11:29:09.167357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.167647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.167655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.167971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.167980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-12-06 11:29:09.168288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.168297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.168615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.168623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.168964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.168973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.169237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.169247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.169405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.169412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-12-06 11:29:09.169734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.169742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.169915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.169923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.170099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.170108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.170374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.170382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.170693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.170701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-12-06 11:29:09.170996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.171006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.171194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.171205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.171501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.171510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.171815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.171825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.172131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.172140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-12-06 11:29:09.172449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.172458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.172763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.172773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.173127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.173137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.173435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.173445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-12-06 11:29:09.173759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-12-06 11:29:09.173769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.272 [2024-12-06 11:29:09.205821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.205830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.206144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.206152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.206443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.206451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.206776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.206784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.206975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.206984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 
00:30:03.272 [2024-12-06 11:29:09.207300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.207308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.207616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.207624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.207808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.207817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.208107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.208116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.208292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.208300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 
00:30:03.272 [2024-12-06 11:29:09.208648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.208658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.208966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.208975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-12-06 11:29:09.209285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-12-06 11:29:09.209294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.209603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.209612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.209939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.209948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.210266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.210274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.210569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.210578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.210920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.210929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.211236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.211244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.211525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.211533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.211685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.211694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.211996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.212005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.212186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.212194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.212511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.212519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.212716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.212724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.213030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.213039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.213341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.213350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.213663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.213671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.213845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.213853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.214163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.214173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.214520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.214529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.214881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.214891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.215196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.215205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.215505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.215515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.215792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.215801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.216109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.216118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.216323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.216331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.216654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.216662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.216972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.216981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.217255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.217263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.217576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.217586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.217914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.217923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.218075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.218083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.218389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.218398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.218715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.218724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.219031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.219040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.219331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.219341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.219646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.219655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.219928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.219936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-12-06 11:29:09.220253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.220261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-12-06 11:29:09.220626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-12-06 11:29:09.220636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.220936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.220944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.221254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.221262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.221576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.221584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.221932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.221941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-12-06 11:29:09.222249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.222257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.222566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.222575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.222882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.222892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.223077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.223086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.223265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.223274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-12-06 11:29:09.223574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.223583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.223891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.223900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.224084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.224093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.224428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.224437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.224633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.224642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-12-06 11:29:09.224800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.224810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.225091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.225101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.225425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.225434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.225742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.225752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.226028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.226037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-12-06 11:29:09.226347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.226356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.226647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.226657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.227036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.227046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.227384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.227395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-12-06 11:29:09.227709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-12-06 11:29:09.227717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-12-06 11:29:09.228017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.274 [2024-12-06 11:29:09.228025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.274 qpair failed and we were unable to recover it.
[the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." message repeat for the same tqpair=0x7f0784000b90 (addr=10.0.0.2, port=4420, errno = 111) from 2024-12-06 11:29:09.228187 through 2024-12-06 11:29:09.261228]
00:30:03.277 [2024-12-06 11:29:09.261537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-12-06 11:29:09.261546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-12-06 11:29:09.261860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-12-06 11:29:09.261872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-12-06 11:29:09.262042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-12-06 11:29:09.262050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-12-06 11:29:09.262342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-12-06 11:29:09.262354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-12-06 11:29:09.262538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-12-06 11:29:09.262548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 
00:30:03.277 [2024-12-06 11:29:09.262857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-12-06 11:29:09.262869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-12-06 11:29:09.263204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-12-06 11:29:09.263213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-12-06 11:29:09.263525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.263534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.263848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.263857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.264040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.264049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 
00:30:03.278 [2024-12-06 11:29:09.264339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.264348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.264585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.264595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.264910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.264920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.265188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.265198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.265508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.265517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 
00:30:03.278 [2024-12-06 11:29:09.265830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.265839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.266124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.266133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.266426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.266436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.266798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.266806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.267092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.267101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 
00:30:03.278 [2024-12-06 11:29:09.267412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.267421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.267752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.267761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.268037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.268046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.268228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.268237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.268578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.268587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 
00:30:03.278 [2024-12-06 11:29:09.268771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.268780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.269049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.269059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.269231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.269239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.269547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.269556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.269838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.269847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 
00:30:03.278 [2024-12-06 11:29:09.270134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.270144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.270453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.270462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.270643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.270651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.270977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.270987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.271293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.271302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 
00:30:03.278 [2024-12-06 11:29:09.271612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.271621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.271929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.271939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.272121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.272128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.272442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.272451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.272759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.272769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 
00:30:03.278 [2024-12-06 11:29:09.273065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.278 [2024-12-06 11:29:09.273074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.278 qpair failed and we were unable to recover it. 00:30:03.278 [2024-12-06 11:29:09.273360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.273368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.273680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.273688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.274000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.274010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.274166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.274174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 
00:30:03.279 [2024-12-06 11:29:09.274487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.274496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.274700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.274708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.274991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.275000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.275169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.275177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.275479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.275486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 
00:30:03.279 [2024-12-06 11:29:09.275639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.275648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.275810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.275817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.276106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.276115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.276443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.276451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.276760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.276768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 
00:30:03.279 [2024-12-06 11:29:09.277078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.277086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.277400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.277408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.277644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.277652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.277965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.277973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.278283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.278292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 
00:30:03.279 [2024-12-06 11:29:09.278599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.278608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.278934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.278943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.279252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.279261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.279599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.279607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.279793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.279801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 
00:30:03.279 [2024-12-06 11:29:09.280097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.280106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.280416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.280424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.280745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.280754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.281054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.281062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.281220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.281229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 
00:30:03.279 [2024-12-06 11:29:09.281559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.281568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.281879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.281888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.282143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.282152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.282481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.282490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 00:30:03.279 [2024-12-06 11:29:09.282804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.279 [2024-12-06 11:29:09.282812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.279 qpair failed and we were unable to recover it. 
00:30:03.279 [2024-12-06 11:29:09.283124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.279 [2024-12-06 11:29:09.283133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.279 qpair failed and we were unable to recover it.
00:30:03.279-00:30:03.282 [... the same posix.c:1054:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f0784000b90 (addr=10.0.0.2, port=4420) repeats continuously from 11:29:09.283439 through 11:29:09.316444, each occurrence ending with "qpair failed and we were unable to recover it." ...]
00:30:03.282 [2024-12-06 11:29:09.316730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.282 [2024-12-06 11:29:09.316740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.282 qpair failed and we were unable to recover it. 00:30:03.282 [2024-12-06 11:29:09.317034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.282 [2024-12-06 11:29:09.317043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.282 qpair failed and we were unable to recover it. 00:30:03.282 [2024-12-06 11:29:09.317366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.282 [2024-12-06 11:29:09.317374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.317682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.317691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.318025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.318034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 
00:30:03.283 [2024-12-06 11:29:09.318345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.318355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.318631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.318639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.318831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.318839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.319140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.319149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.319458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.319466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 
00:30:03.283 [2024-12-06 11:29:09.319774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.319782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.320096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.320104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.320404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.320413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.320727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.320735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.321084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.321094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 
00:30:03.283 [2024-12-06 11:29:09.321279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.321287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.321602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.321611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.321917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.321925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.322248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.322257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.322553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.322563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 
00:30:03.283 [2024-12-06 11:29:09.322853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.322863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.323173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.323183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.323383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.323392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.323623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.323631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.323959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.323967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 
00:30:03.283 [2024-12-06 11:29:09.324293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.324302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.324460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.324468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.324847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.324856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.325046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.325055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.325377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.325384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 
00:30:03.283 [2024-12-06 11:29:09.325695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.325704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.326022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.326031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.326351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.326360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.326688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.326696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.327011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.327020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 
00:30:03.283 [2024-12-06 11:29:09.327220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.327229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.327391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.283 [2024-12-06 11:29:09.327399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.283 qpair failed and we were unable to recover it. 00:30:03.283 [2024-12-06 11:29:09.327674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.327683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.327851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.327860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.328214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.328222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 
00:30:03.284 [2024-12-06 11:29:09.328558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.328567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.328774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.328781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.329060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.329068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.329387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.329397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.329555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.329563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 
00:30:03.284 [2024-12-06 11:29:09.329839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.329847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.330158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.330166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.330471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.330480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.330778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.330786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.330856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.330869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 
00:30:03.284 [2024-12-06 11:29:09.331154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.331162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.331473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.331482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.331782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.331790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.332179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.332187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.332564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.332572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 
00:30:03.284 [2024-12-06 11:29:09.332881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.332889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.333186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.333195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.333508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.333517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.333821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.333829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.334051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.334060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 
00:30:03.284 [2024-12-06 11:29:09.334381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.334389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.334686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.334695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.335005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.335013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.335353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.335362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.335668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.335677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 
00:30:03.284 [2024-12-06 11:29:09.335989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.336005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.336315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.336324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.336626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.336635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.336930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.336940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.337265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.337273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 
00:30:03.284 [2024-12-06 11:29:09.337584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.337592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.337785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.337793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.338082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.338090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.338451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.338460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.284 qpair failed and we were unable to recover it. 00:30:03.284 [2024-12-06 11:29:09.338765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.284 [2024-12-06 11:29:09.338774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.285 qpair failed and we were unable to recover it. 
00:30:03.285 [2024-12-06 11:29:09.339077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.285 [2024-12-06 11:29:09.339085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.285 qpair failed and we were unable to recover it. 00:30:03.288 [... same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated ~114 more times between 11:29:09.339 and 11:29:09.373, differing only in timestamps ...] 
00:30:03.288 [2024-12-06 11:29:09.374106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.374114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.374422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.374430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.374744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.374752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.375030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.375039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.375355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.375363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 
00:30:03.288 [2024-12-06 11:29:09.375671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.375679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.375871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.375879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.376074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.376082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.376381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.376389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.376698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.376706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 
00:30:03.288 [2024-12-06 11:29:09.377006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.377017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.377318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.377327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.377650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.377658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.377959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.377968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.378228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.378236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 
00:30:03.288 [2024-12-06 11:29:09.378540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.378548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.378855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.378865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.379165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.379173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.379336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.379345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.379649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.379657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 
00:30:03.288 [2024-12-06 11:29:09.379975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.379983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.380162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.380172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.380453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.380461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.380749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.380757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.381085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.381094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 
00:30:03.288 [2024-12-06 11:29:09.381401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.381409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.381757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.381765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.381936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.381944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.382162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.382171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.382364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.382373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 
00:30:03.288 [2024-12-06 11:29:09.382686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.382695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.382995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.288 [2024-12-06 11:29:09.383004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.288 qpair failed and we were unable to recover it. 00:30:03.288 [2024-12-06 11:29:09.383323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.383331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.383638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.383646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.383960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.383968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.384268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.384276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.384574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.384582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.384891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.384900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.385285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.385293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.385583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.385591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.385905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.385914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.386255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.386263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.386441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.386450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.386799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.386806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.387122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.387131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.387445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.387453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.387763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.387771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.388084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.388092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.388409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.388417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.388716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.388724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.389030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.389040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.389339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.389348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.389667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.389676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.389984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.389993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.390299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.390307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.390597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.390605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.390915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.390923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.391324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.391333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.391639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.391647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.391958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.391966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.392274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.392282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.392594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.392603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.392910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.392918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.393235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.393243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.393553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.393562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.393879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.393887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.394209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.394217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.394512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.394520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.394679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.394687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 00:30:03.289 [2024-12-06 11:29:09.394990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.289 [2024-12-06 11:29:09.394998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.289 qpair failed and we were unable to recover it. 
00:30:03.289 [2024-12-06 11:29:09.395298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.290 [2024-12-06 11:29:09.395306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.290 qpair failed and we were unable to recover it. 00:30:03.290 [2024-12-06 11:29:09.395607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.290 [2024-12-06 11:29:09.395616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.290 qpair failed and we were unable to recover it. 00:30:03.290 [2024-12-06 11:29:09.395914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.290 [2024-12-06 11:29:09.395921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.290 qpair failed and we were unable to recover it. 00:30:03.290 [2024-12-06 11:29:09.396131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.290 [2024-12-06 11:29:09.396140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.290 qpair failed and we were unable to recover it. 00:30:03.290 [2024-12-06 11:29:09.396409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.290 [2024-12-06 11:29:09.396417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.290 qpair failed and we were unable to recover it. 
00:30:03.290 [2024-12-06 11:29:09.396726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.290 [2024-12-06 11:29:09.396734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.290 qpair failed and we were unable to recover it. 
[... the identical connect() failed (errno = 111) / qpair failed retry sequence for tqpair=0x7f0784000b90 (addr=10.0.0.2, port=4420) repeats from 2024-12-06 11:29:09.397032 through 11:29:09.430646 ...]
00:30:03.568 [2024-12-06 11:29:09.430961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.430969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.431125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.431134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.431441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.431449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.431774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.431783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.432069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.432077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.432409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.432417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.432572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.432581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.432929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.432937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.433250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.433259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.433437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.433447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.433754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.433762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.433958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.433967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.434284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.434293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.434599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.434607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.434898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.434906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.435271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.435279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.435586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.435594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.435906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.435914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.436252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.436260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.436560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.436568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.436876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.436885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.437210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.437220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.437528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.437536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.437841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.437849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.438026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.438035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.438366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.438374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.438711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.438720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.439032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.439041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.439345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.439353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.439658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.439666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.439973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.439981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.440291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.440300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.440609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.440617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.440971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.440980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.441291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.441299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.441608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.441617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.441938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.441946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.442148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.442156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.442489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.442498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.442804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.442813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.442855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.442866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.443033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.443042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.443323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.443332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.443661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.443669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.443967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.443975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.444286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.444295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.444618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.444626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.444925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.444934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.445259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.445267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.445574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.445583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.445897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.445906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.446101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.446109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.446431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.446439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.446747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.446755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.447075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.447083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.447372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.447380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.447690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.447698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.448048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.448056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.448365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.448373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.448668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.448677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.448985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.448995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-12-06 11:29:09.449286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.449295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.449601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-12-06 11:29:09.449610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-12-06 11:29:09.449906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.449921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.450225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.450234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.450539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.450547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 
00:30:03.570 [2024-12-06 11:29:09.450904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.450913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.451219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.451227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.451539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.451549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.451743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.451752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.451921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.451930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 
00:30:03.570 [2024-12-06 11:29:09.452266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.452274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.452582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.452591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.452896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.452904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.453096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.453106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-12-06 11:29:09.453407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-12-06 11:29:09.453416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 
00:30:03.570 [2024-12-06 11:29:09.453726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.570 [2024-12-06 11:29:09.453735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.570 qpair failed and we were unable to recover it.
00:30:03.572 (previous 3 messages repeated for each reconnect attempt, tqpair=0x7f0784000b90, addr=10.0.0.2, port=4420, through [2024-12-06 11:29:09.487006])
00:30:03.572 [2024-12-06 11:29:09.487338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.487346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.487694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.487702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.488016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.488023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.488325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.488333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.488630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.488638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.488950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.488959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.489275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.489283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.489596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.489604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.489899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.489907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.490165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.490174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.490482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.490491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.490676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.490685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.490982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.490990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.491308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.491316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.491633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.491641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.491988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.491996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.492304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.492312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.492594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.492604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.492914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.492922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.493226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.493236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.493402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.493409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.493749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.493757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.493989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.493997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.494319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.494327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.494617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.494625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.494935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.494943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.495263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.495272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.495579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.495587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.495885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.495894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.496101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.496109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.496432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.496440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.496749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.496756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.497057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.497065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.497374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.497382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.497688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.497696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.498011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.498019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.498323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.498331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.498648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.498657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.498969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.498978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.499152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.499161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.499214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.499223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.499422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.499431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.499739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.499747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.500032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.500040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.500361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.500369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.500654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.500661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.500973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.500981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.501294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.501303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.501577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.501585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.501894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.501903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-12-06 11:29:09.502223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.502232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.502539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.502547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.502873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.502881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.503199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-12-06 11:29:09.503207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-12-06 11:29:09.503514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.503522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.573 [2024-12-06 11:29:09.503831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.503839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.504133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.504142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.504429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.504439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.504745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.504753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.505073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.505082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.573 [2024-12-06 11:29:09.505244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.505252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.505427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.505436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.505771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.505778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.506007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.506015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.506348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.506357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.573 [2024-12-06 11:29:09.506665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.506673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.506881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.506890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.507150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.507158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.507468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.507476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.507819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.507828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.573 [2024-12-06 11:29:09.508107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.508115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.508422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.508431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.508737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.508745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.509051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.509059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.509364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.509372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.573 [2024-12-06 11:29:09.509702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.509710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.510020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.510029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.510362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.510370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.510684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.510693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-12-06 11:29:09.511003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-12-06 11:29:09.511012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.543370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.543379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.543719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.543727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.544033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.544041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.544345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.544353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.544538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.544547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.544854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.544865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.545060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.545068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.545383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.545391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.545698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.545706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.546006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.546014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.546322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.546331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.546646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.546653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.546968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.546977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.547269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.547277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.547595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.547604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.547926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.547934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.548143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.548152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.548470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.548479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.548651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.548660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.548868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.548876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.549196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.549204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.549546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.549554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.549869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.549878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.550077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.550086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.550401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.550410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.550594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.550603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.550914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.550922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.551267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.551278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.551576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.551584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.551874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.551882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.552210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.552218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.552524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.552532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.552836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.552844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.553165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.553174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.553494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.553503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.553820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.553828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.554180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.554188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.554523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.554531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.554847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.554855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.555214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.555223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.555536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.555544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.555882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.555890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.556214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.556223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.556417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.556425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.556791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.556800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.557096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.557105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.557413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.557421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.557730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.557738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.557949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.557958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-12-06 11:29:09.558287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.558295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-12-06 11:29:09.558602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-12-06 11:29:09.558610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.558928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.558936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.559253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.559262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.559553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.559561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.559874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.559883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 
00:30:03.576 [2024-12-06 11:29:09.560189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.560197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.560482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.560490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.560853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.560863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.561164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.561172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.561480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.561489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 
00:30:03.576 [2024-12-06 11:29:09.561780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.561788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.562092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.562100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.562409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.562417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.562724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.562733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.562892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.562901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 
00:30:03.576 [2024-12-06 11:29:09.563205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.563213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.563520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.563528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.563838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.563848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.564009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.564019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.564198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.564206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 
00:30:03.576 [2024-12-06 11:29:09.564393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.564401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.564702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.564710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.565034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.565042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.565349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.565357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-12-06 11:29:09.565653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-12-06 11:29:09.565661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 
00:30:03.576 [2024-12-06 11:29:09.565971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.576 [2024-12-06 11:29:09.565979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.576 qpair failed and we were unable to recover it.
[... the same record triple (posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated ~113 more times between 11:29:09.566 and 11:29:09.600, differing only in timestamps ...]
00:30:03.578 [2024-12-06 11:29:09.600968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.578 [2024-12-06 11:29:09.600976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.578 qpair failed and we were unable to recover it.
00:30:03.578 [2024-12-06 11:29:09.601283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.601290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.601597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.601605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.601920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.601929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.602234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.602242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.602516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.602524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.602865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.602873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.603155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.603163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.603457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.603466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.603764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.603773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.604086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.604094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.604403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.604411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.604720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.604729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.604893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.604900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.605247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.605255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.605566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.605574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.605907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.605916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.606070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.606077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.606380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.606388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.606544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.606552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.606857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.606867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.607168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.607176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.607484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.607492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.607772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.607780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.608084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.608092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.608250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.608262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.608562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.608571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.608850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.608857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.609170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.609178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.609519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.609528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.609837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.609846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.610152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.610161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.610450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.610458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.610753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.610762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.611091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.611100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.611271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.611278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.611617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.611625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.611933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.611941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.612161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.612169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.612474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.612483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.612807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.612815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.613204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.613213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.613519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.613528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.613919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.613927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.614219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.614227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.614395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.614403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 
00:30:03.578 [2024-12-06 11:29:09.614723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.614732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.615027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.615035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.578 qpair failed and we were unable to recover it. 00:30:03.578 [2024-12-06 11:29:09.615352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.578 [2024-12-06 11:29:09.615360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.615635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.615643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.615948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.615956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 
00:30:03.579 [2024-12-06 11:29:09.616231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.616239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.616526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.616535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.616843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.616851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.617134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.617143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.617460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.617468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 
00:30:03.579 [2024-12-06 11:29:09.617686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.617693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.617912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.617920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.618213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.618221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.618527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.618534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.618845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.618853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 
00:30:03.579 [2024-12-06 11:29:09.619209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.619218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.619522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.619531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.619836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.619845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.620031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.620039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.620421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.620430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 
00:30:03.579 [2024-12-06 11:29:09.620739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.620747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.621035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.621044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.621374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.621383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.621693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.621700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.622008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.622016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 
00:30:03.579 [2024-12-06 11:29:09.622336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.622344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.622633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.622641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.622982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.622990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.623165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.623173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.623536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.623545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 
00:30:03.579 [2024-12-06 11:29:09.623878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.623886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.624197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.624205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.624514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.624522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.624833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.624842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 00:30:03.579 [2024-12-06 11:29:09.624998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.579 [2024-12-06 11:29:09.625008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.579 qpair failed and we were unable to recover it. 
[... the same three-line record — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously through 2024-12-06 11:29:09.657652 (wallclock 00:30:03.579-00:30:03.581); duplicate records elided ...]
00:30:03.581 [2024-12-06 11:29:09.657845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.657854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.658032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.658040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.658363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.658371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.658697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.658705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.659012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.659020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.659313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.659321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.659499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.659506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.659831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.659839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.660143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.660151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.660447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.660455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.660776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.660785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.661093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.661101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.661408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.661416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.661714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.661721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.662040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.662048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.662094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.662100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.662398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.662406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.662722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.662730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.663033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.663041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.663367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.663375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.663690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.663699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.664017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.664026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.664334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.664343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.664656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.664664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.664963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.664971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.665160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.665169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.665463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.665470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.665646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.665653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.665970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.665978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.666305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.666312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.666619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.666626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.666805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.666812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.667093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.667101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.667150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.667157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.667349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.667359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.667648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.667656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.667948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.667956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.668271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.668279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.668609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.668617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 00:30:03.581 [2024-12-06 11:29:09.668937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.581 [2024-12-06 11:29:09.668945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.581 qpair failed and we were unable to recover it. 
00:30:03.581 [2024-12-06 11:29:09.669252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.669260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.669574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.669582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.669885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.669894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.670207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.670215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.670522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.670529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 
00:30:03.582 [2024-12-06 11:29:09.670837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.670845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.671137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.671145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.671455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.671463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.671633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.671643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.671847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.671854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 
00:30:03.582 [2024-12-06 11:29:09.672047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.672055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.672371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.672379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.672755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.672763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.673087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.673096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.673399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.673407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 
00:30:03.582 [2024-12-06 11:29:09.673599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.673607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.673898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.673906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.674237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.674245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.674558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.674566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.674867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.674875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 
00:30:03.582 [2024-12-06 11:29:09.675175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.675183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.675491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.675499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.675826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.675834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.676143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.676151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.676471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.676479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 
00:30:03.582 [2024-12-06 11:29:09.676785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.676794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.677100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.677109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.677307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.677314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.677495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.677503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.677829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.677837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 
00:30:03.582 [2024-12-06 11:29:09.678166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.678175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.678516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.678524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.678823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.678831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.679133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.679141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 00:30:03.582 [2024-12-06 11:29:09.679294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.582 [2024-12-06 11:29:09.679302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.582 qpair failed and we were unable to recover it. 
00:30:03.582 [2024-12-06 11:29:09.679619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.582 [2024-12-06 11:29:09.679627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.582 qpair failed and we were unable to recover it.
[... the three log lines above repeat ~115 times between 11:29:09.679619 and 11:29:09.712589, always with errno = 111, tqpair=0x7f0784000b90, addr=10.0.0.2, port=4420 ...]
00:30:03.584 [2024-12-06 11:29:09.712581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.584 [2024-12-06 11:29:09.712589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.584 qpair failed and we were unable to recover it.
00:30:03.584 [2024-12-06 11:29:09.712883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.712891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.713201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.713209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.713503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.713511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.713816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.713825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.714120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.714130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 
00:30:03.584 [2024-12-06 11:29:09.714422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.714430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.714599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.714608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.714924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.714933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.715248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.715256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.715597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.715605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 
00:30:03.584 [2024-12-06 11:29:09.715918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.715926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.716247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.716255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.716561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.716569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.716866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.716875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.717213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.717221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 
00:30:03.584 [2024-12-06 11:29:09.717536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.717543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.717849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.717858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.584 [2024-12-06 11:29:09.718140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.584 [2024-12-06 11:29:09.718149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.584 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.718462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.718472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.718790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.718800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-12-06 11:29:09.719176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.719184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.719498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.719506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.719809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.719816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.720122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.720131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.720439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.720448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-12-06 11:29:09.720729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.720737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.720920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.720929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.721269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.721277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.721585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.721593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.721920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.721928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-12-06 11:29:09.722186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.722194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.722516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.722524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.722822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.722831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.723024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.723033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.723344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.723352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-12-06 11:29:09.723662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.723671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.723984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.723992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.724303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.724311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.724630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.724638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.724807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.724816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-12-06 11:29:09.725127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.725135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.725423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.725431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.725750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.725758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.725972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.725980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.726293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.726302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-12-06 11:29:09.726600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.726607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.726919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.726928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.727229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.727237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.727531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.727540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-12-06 11:29:09.727870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-12-06 11:29:09.727878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-12-06 11:29:09.728147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.728156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.728459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.728467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.728817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.728825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.729025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.729033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.729343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.729351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-12-06 11:29:09.729656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.729663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.729972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.729980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.730305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.730313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.730624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.730632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.730947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.730956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-12-06 11:29:09.731264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.731273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.731433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.731442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.731721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.731729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.732030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.732038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.732357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.732365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-12-06 11:29:09.732699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.732706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.733085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.733093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.733401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.733409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.733710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.733718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.734007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.734016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-12-06 11:29:09.734346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.734355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.734666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.734675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.735002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.735010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.735188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.735197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-12-06 11:29:09.735384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.735392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-12-06 11:29:09.735699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-12-06 11:29:09.735707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x7f0784000b90, addr=10.0.0.2, port=4420 repeat through 2024-12-06 11:29:09.770674; duplicate entries omitted ...]
00:30:03.868 [2024-12-06 11:29:09.771012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.771021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.771331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.771343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.771652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.771660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.771986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.771994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.772191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.772199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-12-06 11:29:09.772503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.772511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.772782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.772790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.773117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.773125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.773304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.773314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.773595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.773603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-12-06 11:29:09.773908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.773916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.774263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.774271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.774439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.774448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.774657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.774664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.774939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.774947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-12-06 11:29:09.775278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.775286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.775594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.775601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.775795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.775802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.776069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.776078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.776423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.776431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-12-06 11:29:09.776717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.776725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.777035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.777043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.777350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.777359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.777656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.777663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.777926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.777935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-12-06 11:29:09.778103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.778112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.778447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.778456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.778749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.778757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.779028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.779035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.779352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.779360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-12-06 11:29:09.779537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.779546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.779868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.779876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.780167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-12-06 11:29:09.780175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-12-06 11:29:09.780358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.780367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.780676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.780684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.781008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.781017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.781363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.781371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.781676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.781684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.781996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.782004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.782339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.782347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.782641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.782649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.782952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.782963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.783266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.783273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.783599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.783608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.783949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.783957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.784260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.784269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.784577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.784586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.784914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.784921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.785120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.785128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.785443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.785452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.785761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.785769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.786046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.786054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.786228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.786236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.786565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.786572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.786877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.786885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.787161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.787169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.787510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.787518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.787827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.787836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.788135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.788144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.788483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.788490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.788805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.788813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.789121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.789130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.789437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.789446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.789609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.789618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.789898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.789907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.790230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.790239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.790541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.790549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.790890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.790899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.791218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.791226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.791545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.791554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.791870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.791879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.792035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.792043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.792222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.792231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.792547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.792555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.792734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.792743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-12-06 11:29:09.793077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.793085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.793308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.793316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.793697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.793705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.793995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.794003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-12-06 11:29:09.794319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-12-06 11:29:09.794327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-12-06 11:29:09.826781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.826789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.827142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.827150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.827321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.827330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.827654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.827662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.827933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.827941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-12-06 11:29:09.828101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.828110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.828414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.828422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.828762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.828770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.829057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.829065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.829252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.829260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-12-06 11:29:09.829529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.829538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.829889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.829901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.830199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.830208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.830509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.830517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.830831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.830839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-12-06 11:29:09.831049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.831058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.831373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.831381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.831669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.831677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-12-06 11:29:09.832029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-12-06 11:29:09.832038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.832342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.832350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.832639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.832647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.832937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.832946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.833241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.833249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.833443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.833452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.833698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.833706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.834029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.834037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.834235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.834243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.834568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.834576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.834915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.834923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.835233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.835241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.835547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.835555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.835955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.835964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.836153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.836163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.836338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.836347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.836711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.836719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.837068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.837077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.837403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.837411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.837718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.837726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.838035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.838044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.838358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.838366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.838697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.838705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.839013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.839021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.839324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.839332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.839488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.839496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.839828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.839837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.840163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.840171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.840512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.840520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.840775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.840782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.841068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.841077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.841404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.841412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.841720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.841727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.842029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.842040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.842333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.842341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.842639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.842647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.842972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.842980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.843288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.843296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.843603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.843611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.843935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.843943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.844275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.844283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.844589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.844597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.844888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.844898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.845198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.845205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.845516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.845524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.845829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.845837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.846159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.846167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.846456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.846465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.846772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.846780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.847094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.847102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.847396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.847403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.847723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.847732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.848045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.848054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.848331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.848340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.848559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.848568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.848882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.848890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.849203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.849211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-12-06 11:29:09.849517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.849525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.849703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.849712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-12-06 11:29:09.850030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-12-06 11:29:09.850038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.850354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.850363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.850540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.850548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.850866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.850875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.851063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.851071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.851378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.851386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.851696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.851704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.851848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.851856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.852132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.852140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.852463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.852471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.852791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.852799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.853109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.853117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.853448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.853456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.853774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.853782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.854057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.854067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.854379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.854388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.854691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.854700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.855042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.855050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.855225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.855234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.855561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.855569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.855900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.855908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.856219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.856227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.856540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.856548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.856853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.856865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.857171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.857179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.857473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.857481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.857788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.857796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.858104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.858113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.858396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.858404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.858670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.858678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.858986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.858994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.859266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.859274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.859563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.859570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.859878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.859887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.860072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.860080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.860379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.860386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.860602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.860610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.860894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.860902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.861117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.861126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.861374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.861381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.861688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.861696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.861999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.862007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.862349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.862357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.862665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.862674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.862866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.862875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.863125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.863135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.863318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.863325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.863624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.863632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.863918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.863926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.864227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.864235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.864487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.864495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.864698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.864706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.864904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.864912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.865183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.865191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.865497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.865508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.865815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.865822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.866139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.866147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-12-06 11:29:09.866473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.873 [2024-12-06 11:29:09.866481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-12-06 11:29:09.866775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.866783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.867054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.867062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.867359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.867367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.867678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.867685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.867883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.867892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-12-06 11:29:09.868216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.868225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.868506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.868514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.868834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.868842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.869140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.869149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.869437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.869445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-12-06 11:29:09.869657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.869665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.869978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.869986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.870310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.870318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.870624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.870633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.870932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.870941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-12-06 11:29:09.871243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.871252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.871553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.871560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.871872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.871880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.872156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.872164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.872474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.872482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-12-06 11:29:09.872781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.872789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.873096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.873104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.873401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.873409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.873705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.873714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.874032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.874041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-12-06 11:29:09.874371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.874379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.874516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.874524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.874837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.874845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.875163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.875172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.875462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.875470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-12-06 11:29:09.875745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.875753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.876050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.876058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.876366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.876374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.876666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.876673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.876995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.877004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-12-06 11:29:09.877318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.877327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.877510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.877522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.877736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.877744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.878029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.878037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-12-06 11:29:09.878343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.874 [2024-12-06 11:29:09.878351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.876 [2024-12-06 11:29:09.912035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.912043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.912199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.912208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.912513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.912521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.912674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.912682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.912998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.913006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 
00:30:03.876 [2024-12-06 11:29:09.913388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.913396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.913562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.913571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.913894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.913903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.914061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.914068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.914394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.914402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 
00:30:03.876 [2024-12-06 11:29:09.914707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.914715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.915046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.915054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.915389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.915397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.915702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.915710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.916028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.916036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 
00:30:03.876 [2024-12-06 11:29:09.916310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.916319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.916644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.916652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.916960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.916969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.917274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.917282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.876 [2024-12-06 11:29:09.917563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.917571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 
00:30:03.876 [2024-12-06 11:29:09.917870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.876 [2024-12-06 11:29:09.917879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.876 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.918148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.918156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.918462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.918470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.918800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.918808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.919028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.919036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.919224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.919232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.919599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.919606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.919886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.919894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.920081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.920089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.920419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.920427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.920734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.920742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.921053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.921061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.921358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.921366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.921674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.921684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.921882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.921890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.922193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.922201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.922506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.922514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.922817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.922826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.923139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.923147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.923474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.923481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.923778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.923786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.924088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.924097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.924402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.924410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.924707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.924715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.925063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.925072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.925418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.925426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.925746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.925754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.926087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.926095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.926383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.926391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.926701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.926708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.927009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.927017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.927326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.927334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.927664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.927672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.927981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.927989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.928161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.928170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.928487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.928495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.928783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.928792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.929145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.929153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.929473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.929480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.929806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.929814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.930111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.930119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.930430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.930437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.930616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.930625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.930925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.930933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.931111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.931120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.931435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.931443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.931750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.931757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.932072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.932080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.932367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.932375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.932567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.932575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 
00:30:03.877 [2024-12-06 11:29:09.932881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.932890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.933191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.933199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.933488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.933496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.933803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.877 [2024-12-06 11:29:09.933812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.877 qpair failed and we were unable to recover it. 00:30:03.877 [2024-12-06 11:29:09.934162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-12-06 11:29:09.934170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 
00:30:03.878 [2024-12-06 11:29:09.934499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.878 [2024-12-06 11:29:09.934506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:03.878 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated for each retry through 2024-12-06 11:29:09.969359 ...]
00:30:03.879 [2024-12-06 11:29:09.969542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-12-06 11:29:09.969551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-12-06 11:29:09.969853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-12-06 11:29:09.969861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.970163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.970171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.970511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.970520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.970831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.970838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.971081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.971090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.971416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.971424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.971724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.971732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.972033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.972041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.972324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.972333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.972630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.972638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.972963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.972971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.973282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.973290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.973592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.973600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.973894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.973902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.974196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.974204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.974512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.974519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.974698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.974706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.975028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.975036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.975339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.975347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.975644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.975652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.975835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.975844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.976123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.976131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.976420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.976428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.976736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.976744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.977033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.977041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.977354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.977362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.977664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.977672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.977998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.978006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.978315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.978324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.978630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.978639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.978944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.978952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.979279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.979287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.979599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.979607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.979898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.979907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.980234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.980243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.980547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.980555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.980861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.980873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.981156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.981165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.981348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.981357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.981621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.981630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.981803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.981810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.982102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.982110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.982444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.982454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.982789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.982796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.983078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.983086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.983391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.983399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.983681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.983690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.983997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.984007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.984346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.984354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-12-06 11:29:09.984651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.984660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.984952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.984960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.985167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.985175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.985490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.985497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-12-06 11:29:09.985789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-12-06 11:29:09.985797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-12-06 11:29:09.986100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.986108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.986414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.986422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.986727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.986735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.986929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.987247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.987255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-12-06 11:29:09.987558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.987566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.987741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.987749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.988030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.988038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.988326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.988334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.988608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.988616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-12-06 11:29:09.988938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.988947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.989237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.989245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.989419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.989428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.989725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.989733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.989884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.989892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-12-06 11:29:09.990189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.990197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.990406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.990415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.990720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.990728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.991036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.991045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.991342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.991350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-12-06 11:29:09.991643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.991650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.991808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.991815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.992005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.992014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.992304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.992312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-12-06 11:29:09.992616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-12-06 11:29:09.992624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-12-06 11:29:10.025496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.025504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.025748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.025757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.025952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.025960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.026301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.026309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.026525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.026533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-12-06 11:29:10.026806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.026814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.027122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.027130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.027420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.027428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.027772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.027779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.028090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.028099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-12-06 11:29:10.028447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.028455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.028786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.028794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.029101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.029109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.029220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.029229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.029475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.029484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-12-06 11:29:10.029822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.029831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.030162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.030172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.030498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.030506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.030811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.030819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.031096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.031104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-12-06 11:29:10.031279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.031288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.031596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-12-06 11:29:10.031605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-12-06 11:29:10.031914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.031922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.032259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.032267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.032588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.032596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-12-06 11:29:10.032934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.032942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.033260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.033268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.033445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.033453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.033788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.033796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.034106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.034115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-12-06 11:29:10.034410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.034418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.034746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.034755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.034988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.034996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.035201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.035209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.035358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.035365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-12-06 11:29:10.035621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.035629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.035967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.035975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.036158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.036167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.036458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.036466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.036631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.036640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-12-06 11:29:10.036945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.036954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.037138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.037146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.037400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.037408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.037724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.037732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.038016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.038024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-12-06 11:29:10.038373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.038381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.038681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.038689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.039083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.039091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.039387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.039395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.039585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.039593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-12-06 11:29:10.039909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.039918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.040229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.040237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.040425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.040433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.040759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.040768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.041078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.041087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-12-06 11:29:10.041454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.041462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.041653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.041661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.041721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.041731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-12-06 11:29:10.041952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-12-06 11:29:10.041960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.042025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.042032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-12-06 11:29:10.042229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.042238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.042412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.042421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.042655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.042664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.042986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.042995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.043100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.043108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-12-06 11:29:10.043300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.043308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.043477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.043483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.043672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.043680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.043768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.043775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.044011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.044019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-12-06 11:29:10.044279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.044289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.044621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.044629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.044951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.044959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.045174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.045183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.045520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.045529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-12-06 11:29:10.045912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.045920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.046142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.046150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.046480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.046488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.046668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.046677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-12-06 11:29:10.046977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-12-06 11:29:10.046985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.162 [... identical error pair repeated: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED) / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it."; retries continue through 2024-12-06 11:29:10.079085 ...]
00:30:04.162 [2024-12-06 11:29:10.079375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.079383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.079575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.079583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.079873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.079881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.080207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.080215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.080559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.080567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 
00:30:04.162 [2024-12-06 11:29:10.080877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.080886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.081103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.081111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.081432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.081440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.081782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.081790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.082089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.082097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 
00:30:04.162 [2024-12-06 11:29:10.082398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.082407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.082725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.082733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-12-06 11:29:10.083035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-12-06 11:29:10.083043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.083373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.083381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.083688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.083697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-12-06 11:29:10.083893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.083902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.084198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.084207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.084499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.084507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.084814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.084822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.085132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.085141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-12-06 11:29:10.085440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.085449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.085742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.085751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.086036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.086044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.086191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.086199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.086542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.086551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-12-06 11:29:10.086708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.086716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.087011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.087020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.087335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.087344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.087419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.087427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.087696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.087704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-12-06 11:29:10.088017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.088025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.088335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.088343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.088639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.088647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.088955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.088964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.089284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.089293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-12-06 11:29:10.089592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.089601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.089805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.089813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.090140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.090148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.090223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.090230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.090528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.090536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-12-06 11:29:10.090911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.090919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.091212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.091221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.091532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.091540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.091847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.091855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.092014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.092022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-12-06 11:29:10.092331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.092339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.092536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.092545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.092871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-12-06 11:29:10.092879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-12-06 11:29:10.093150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.093159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.093463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.093471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-12-06 11:29:10.093784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.093795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.094124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.094132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.094340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.094348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.094635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.094643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.094835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.094843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-12-06 11:29:10.095179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.095187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.095522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.095530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.095786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.095794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.096101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.096110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.096384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.096392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-12-06 11:29:10.096704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.096713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.097080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.097088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.097390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.097398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.097724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.097732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.097975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.097983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-12-06 11:29:10.098279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.098287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.098597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.098605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.098920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.098928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.099237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.099246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.099547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.099555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-12-06 11:29:10.099759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.099767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.100166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.100174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.100463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.100472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.100751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.100759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-12-06 11:29:10.101073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-12-06 11:29:10.101082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-12-06 11:29:10.101494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.164 [2024-12-06 11:29:10.101502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:04.164 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f0784000b90 (addr=10.0.0.2, port=4420) repeats from 2024-12-06 11:29:10.101830 through 2024-12-06 11:29:10.133818 ...]
00:30:04.168 [2024-12-06 11:29:10.134129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.168 [2024-12-06 11:29:10.134137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:04.168 qpair failed and we were unable to recover it.
00:30:04.168 [2024-12-06 11:29:10.134411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.134419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.134757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.134765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.134962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.134970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.135153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.135161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.135498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.135506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-12-06 11:29:10.135832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.135840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.135996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.136006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.136165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.136173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.136504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.136512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.136854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.136869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-12-06 11:29:10.137178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.137186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.137497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.137505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.137804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.137813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.138122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.138130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.138314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.138322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-12-06 11:29:10.138598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.138606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.138897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.138906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.139234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.139242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.139587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.139595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.139936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.139944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-12-06 11:29:10.140243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.140253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.140581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.140589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.140852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.140860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.141173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.141182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.141371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.141378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-12-06 11:29:10.141682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.141691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.141854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.141866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.142209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.142217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-12-06 11:29:10.142515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-12-06 11:29:10.142523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.142855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.142866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-12-06 11:29:10.143154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.143162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.143473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.143481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.143777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.143785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.144073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.144081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.144392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.144400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-12-06 11:29:10.144696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.144704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.145035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.145044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.145375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.145383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.145701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.145709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.146023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.146031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-12-06 11:29:10.146270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.146279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.146602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.146609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.146762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.146770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.147041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.147048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.147345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.147353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-12-06 11:29:10.147500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.147508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.147796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.147804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.148153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.148162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.148470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.148478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.148786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.148794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-12-06 11:29:10.148987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.148996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.149298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.149306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.149615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.149623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.149931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.149939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.150276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.150283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-12-06 11:29:10.150590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.150598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.150913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.150921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.151208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.151216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.151562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.151569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.151874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.151883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-12-06 11:29:10.152173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.152184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.152488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.152497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-12-06 11:29:10.152789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-12-06 11:29:10.152797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.153094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.153102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.153430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.153438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 
00:30:04.170 [2024-12-06 11:29:10.153745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.153753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.154051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.154059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.154366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.154374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.154696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.154704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.155043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.155051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 
00:30:04.170 [2024-12-06 11:29:10.155339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.155348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.155654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.155662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.155970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.155979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.156272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.156280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-12-06 11:29:10.156585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-12-06 11:29:10.156592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 
00:30:04.170 [2024-12-06 11:29:10.156914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.170 [2024-12-06 11:29:10.156922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:04.170 qpair failed and we were unable to recover it.
00:30:04.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3628542 Killed "${NVMF_APP[@]}" "$@"
00:30:04.171 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:04.171 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:04.171 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:04.171 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:04.171 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:04.171 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3629575
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3629575
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3629575 ']'
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:04.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:04.172 11:29:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:04.173 [2024-12-06 11:29:10.189523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.189532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.189796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.189805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.190176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.190186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.190314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.190322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.190545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.190553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 
00:30:04.173 [2024-12-06 11:29:10.190826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.190835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.191111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.191119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.191411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.191420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.191724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.191732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.192044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.192052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 
00:30:04.173 [2024-12-06 11:29:10.192423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.192433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.192760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.192768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.193067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.193076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.193377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.193385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.193579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.193586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 
00:30:04.173 [2024-12-06 11:29:10.193899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.193908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.173 qpair failed and we were unable to recover it. 00:30:04.173 [2024-12-06 11:29:10.194218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.173 [2024-12-06 11:29:10.194227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.194555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.194564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.194839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.194848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.195011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.195020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 
00:30:04.174 [2024-12-06 11:29:10.195306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.195315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.195513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.195521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.195830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.195839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.196141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.196150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.196439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.196448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 
00:30:04.174 [2024-12-06 11:29:10.196773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.196782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.197072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.197080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.197371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.197380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.197571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.197579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.197868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.197877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 
00:30:04.174 [2024-12-06 11:29:10.198241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.198250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.198534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.198543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.198893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.198901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.199217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.199225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.199508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.199517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 
00:30:04.174 [2024-12-06 11:29:10.199670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.199678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.199978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.199986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.200297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.200305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.200504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.200511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.200685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.200694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 
00:30:04.174 [2024-12-06 11:29:10.200986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.200994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.201316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.201324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.201616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.201626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.201956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.201964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.202286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.202294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 
00:30:04.174 [2024-12-06 11:29:10.202599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.202607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.202937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.202946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.203288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.203297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.203609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.203617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.203928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.203936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 
00:30:04.174 [2024-12-06 11:29:10.204216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.204224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.204529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.204537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.204842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.204850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.205153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.174 [2024-12-06 11:29:10.205161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.174 qpair failed and we were unable to recover it. 00:30:04.174 [2024-12-06 11:29:10.205471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.205480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 
00:30:04.175 [2024-12-06 11:29:10.205770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.205778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.206086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.206094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.206406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.206415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.206599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.206608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.206915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.206924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 
00:30:04.175 [2024-12-06 11:29:10.207105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.207113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.207496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.207505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.207790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.207798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.208079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.208088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.208399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.208408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 
00:30:04.175 [2024-12-06 11:29:10.208704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.208712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.209085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.209093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.209400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.209408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.209711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.209720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.210035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.210044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 
00:30:04.175 [2024-12-06 11:29:10.210353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.210361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.210651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.210660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.210966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.210975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.211302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.211310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.211645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.211654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 
00:30:04.175 [2024-12-06 11:29:10.211824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.211833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.212162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.212171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.212494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.212502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.212833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.212841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 00:30:04.175 [2024-12-06 11:29:10.213040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.175 [2024-12-06 11:29:10.213048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.175 qpair failed and we were unable to recover it. 
00:30:04.177 [2024-12-06 11:29:10.234887] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization...
00:30:04.177 [2024-12-06 11:29:10.234962] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:04.178 [2024-12-06 11:29:10.246645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.246654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-12-06 11:29:10.246952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.246960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-12-06 11:29:10.247284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.247292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-12-06 11:29:10.247473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.247482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-12-06 11:29:10.247765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.247773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 
00:30:04.178 [2024-12-06 11:29:10.248088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.248096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-12-06 11:29:10.248398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.248407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-12-06 11:29:10.248747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-12-06 11:29:10.248755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-12-06 11:29:10.249029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.249037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.249353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.249361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.249667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.249675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.249989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.249997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.250037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.250046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.250315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.250324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.250613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.250622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.250937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.250946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.251293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.251302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.251609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.251617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.251909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.251918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.252235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.252243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.252549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.252557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.252826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.252834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.253156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.253165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.253470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.253479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.253785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.253794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.254160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.254168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.254496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.254504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.254872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.254881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.255190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.255199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.255548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.255556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.255846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.255855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.256153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.256161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.256320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.256328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.256491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.256500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.256802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.256811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.257184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.257193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.257483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.257492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.257814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.257821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.258131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.258140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.258469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.258477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.258681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.258689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.259017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.259025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.259320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.259328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.259615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.259624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.259934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.259943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-12-06 11:29:10.260107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-12-06 11:29:10.260116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-12-06 11:29:10.260411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.260420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.260590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.260598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.260906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.260914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.261127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.261135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 
00:30:04.180 [2024-12-06 11:29:10.261467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.261475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.261790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.261798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.262106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.262116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.262398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.262407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.262734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.262743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 
00:30:04.180 [2024-12-06 11:29:10.263065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.263074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.263327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.263335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.263558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.263567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.263872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.263881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.264166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.264174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 
00:30:04.180 [2024-12-06 11:29:10.264500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.264509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.264836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.264844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.265152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.265160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.265437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.265445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.265764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.265772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 
00:30:04.180 [2024-12-06 11:29:10.265958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.265968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.266300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.266308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.266613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.266621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.266926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.266935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.267243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.267251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 
00:30:04.180 [2024-12-06 11:29:10.267573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.267582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.267881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.267890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.268212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.268220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.268527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.268535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.268866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.268875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 
00:30:04.180 [2024-12-06 11:29:10.269169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.269178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.269483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.269491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.269776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.269784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.270085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.270094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 00:30:04.180 [2024-12-06 11:29:10.270381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.180 [2024-12-06 11:29:10.270389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.180 qpair failed and we were unable to recover it. 
00:30:04.183 [2024-12-06 11:29:10.303528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.183 [2024-12-06 11:29:10.303536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.183 qpair failed and we were unable to recover it. 00:30:04.183 [2024-12-06 11:29:10.303875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.183 [2024-12-06 11:29:10.303884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.183 qpair failed and we were unable to recover it. 00:30:04.183 [2024-12-06 11:29:10.304185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.183 [2024-12-06 11:29:10.304193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.183 qpair failed and we were unable to recover it. 00:30:04.183 [2024-12-06 11:29:10.304506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.304514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.304700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.304710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 
00:30:04.184 [2024-12-06 11:29:10.305002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.305011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.305338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.305346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.305656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.305664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.305972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.305981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.306318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.306327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 
00:30:04.184 [2024-12-06 11:29:10.306636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.306644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.306846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.306855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.307161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.307170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.307474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.307482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.307662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.307671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 
00:30:04.184 [2024-12-06 11:29:10.307971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.307979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.308285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.308293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.184 [2024-12-06 11:29:10.308589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.184 [2024-12-06 11:29:10.308597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.184 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.308937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.308946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.309111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.309119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.309425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.309433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.309781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.309789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.310093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.310104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.310275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.310283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.310614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.310623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.310959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.310967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.311297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.311306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.311635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.311644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.311808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.311817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.312028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.312037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.312231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.312240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.312552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.312561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.312753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.312761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.313076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.313084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.313390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.313398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.313697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.313705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.314014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.314023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.314197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.314206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.314520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.314528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.314834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.314843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.315015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.315025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.315325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.315334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.315641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.315649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.315956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.315964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.316343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.316351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.316677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.316686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.316872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.316882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.317176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.317185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.317522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.317530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.317720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.317729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.317906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.317916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.318284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.318292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.318576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.318584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.318923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.318932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.319236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.319245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 
00:30:04.467 [2024-12-06 11:29:10.319438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.319446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.319737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.319746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.320055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.320065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.467 qpair failed and we were unable to recover it. 00:30:04.467 [2024-12-06 11:29:10.320268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.467 [2024-12-06 11:29:10.320277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.320441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.320449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 
00:30:04.468 [2024-12-06 11:29:10.320832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.320840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.321149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.321157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.321361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.321371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.321688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.321697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.322016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.322024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 
00:30:04.468 [2024-12-06 11:29:10.322190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.322199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.322467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.322475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.322687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.322696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.323026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.323034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.323374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.323382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 
00:30:04.468 [2024-12-06 11:29:10.323696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.323705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.324024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.324033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.324339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.324348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.324647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.324656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 00:30:04.468 [2024-12-06 11:29:10.324901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.468 [2024-12-06 11:29:10.324910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.468 qpair failed and we were unable to recover it. 
00:30:04.468 [2024-12-06 11:29:10.325240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.468 [2024-12-06 11:29:10.325249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420
00:30:04.468 qpair failed and we were unable to recover it.
00:30:04.469 [2024-12-06 11:29:10.340559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:04.471 [2024-12-06 11:29:10.358924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.358933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.359138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.359146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.359339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.359346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.359628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.359636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.359945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.359954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 
00:30:04.471 [2024-12-06 11:29:10.360170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.360178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.360360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.360369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.360692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.360700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.361029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.361038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.361344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.361352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 
00:30:04.471 [2024-12-06 11:29:10.361673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.361681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.362060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.362068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.362370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-12-06 11:29:10.362378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-12-06 11:29:10.362718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.362726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.363126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.363134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.363439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.363448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.363750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.363760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.364088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.364097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.364408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.364416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.364728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.364737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.364902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.364910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.365215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.365224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.365535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.365543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.365916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.365925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.366089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.366097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.366426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.366434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.366707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.366716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.367025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.367034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.367364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.367373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.367703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.367712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.368023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.368032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.368353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.368362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.368545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.368558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.368880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.368891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.369210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.369218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.369517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.369525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.369810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.369818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.370134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.370143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.370463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.370471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.370783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.370791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.371084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.371093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.371391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.371400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.371711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.371721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.372117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.372126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.372451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.372460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.372783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.372791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.373094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.373103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.373417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.373425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.373722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.373730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.374020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.374028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-12-06 11:29:10.374339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-12-06 11:29:10.374347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-12-06 11:29:10.374546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-12-06 11:29:10.374555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-12-06 11:29:10.374770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-12-06 11:29:10.374779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-12-06 11:29:10.375058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-12-06 11:29:10.375067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-12-06 11:29:10.375434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-12-06 11:29:10.375442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 
00:30:04.473 [... repeated connect() failed (errno = 111) / qpair recovery errors for tqpair=0x7f0784000b90, addr=10.0.0.2, port=4420 omitted ...] 00:30:04.473 [2024-12-06 11:29:10.375876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.473 [2024-12-06 11:29:10.375905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.473 [2024-12-06 11:29:10.375913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.473 [2024-12-06 11:29:10.375919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.473 [2024-12-06 11:29:10.375925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. [... repeated connect() failed (errno = 111) / qpair recovery errors omitted ...] 
[... repeated connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x7f0784000b90, addr=10.0.0.2, port=4420 omitted ...] 
00:30:04.473 [2024-12-06 11:29:10.377524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 [... repeated connect() failed (errno = 111) / qpair recovery errors for tqpair=0x7f0784000b90, addr=10.0.0.2, port=4420 omitted ...] 00:30:04.473 [2024-12-06 11:29:10.377692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:04.473 [2024-12-06 11:29:10.377807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:04.473 [2024-12-06 11:29:10.377808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 [... repeated connect() failed (errno = 111) / qpair recovery errors omitted ...] 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs for tqpair=0x7f0784000b90, addr=10.0.0.2, port=4420 repeated through 2024-12-06 11:29:10.385887; repeats omitted ...] 
00:30:04.474 [2024-12-06 11:29:10.386212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.386220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.386541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.386550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.386871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.386879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.387055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.387063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.387356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.387365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-12-06 11:29:10.387524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.387532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.387766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.387774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.388093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.388101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.388414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.388423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.388725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.388735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-12-06 11:29:10.388893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.388903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.389269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.389277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.389464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.389473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.389786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.389795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.390053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.390061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-12-06 11:29:10.390227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.390237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.390451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.390459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.390777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.390785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.391096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.391105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.391285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.391293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-12-06 11:29:10.391461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.391470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.391875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.391883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.392202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.392210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.392527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.392535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.392588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.392597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-12-06 11:29:10.392896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.392906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.393221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.393229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.393546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.393556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.393868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.393878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.394201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.394209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-12-06 11:29:10.394561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.394571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.394889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-12-06 11:29:10.394897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-12-06 11:29:10.395218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.395227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.395533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.395541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.395702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.395710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.395897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.395906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.396093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.396102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.396277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.396287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.396480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.396489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.396789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.396798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.396998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.397006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.397324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.397333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.397645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.397654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.397817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.397825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.398198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.398207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.398545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.398553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.398885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.398894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.399238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.399247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.399566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.399575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.399884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.399894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.400264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.400271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.400575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.400584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.400881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.400889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.401069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.401078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.401414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.401422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.401738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.401746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.401789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.401795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.402015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.402024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.402328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.402337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.402635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.402643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.402956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.402965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.403295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.403304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.403628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.403637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.403940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.403949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.404123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.404131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.404430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.404439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.404749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.404758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.405067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.405077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.405266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.405274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-12-06 11:29:10.405590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-12-06 11:29:10.405598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-12-06 11:29:10.405908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.405916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.406237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.406245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.406555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.406563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.406733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.406740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.406916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.406924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 
00:30:04.476 [2024-12-06 11:29:10.407219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.407227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.407560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.407568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.407876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.407885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.408169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.408177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-12-06 11:29:10.408508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-12-06 11:29:10.408516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 
00:30:04.479 [2024-12-06 11:29:10.437971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.437980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.438300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.438308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.438488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.438496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.438805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.438813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.438988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.438996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 
00:30:04.479 [2024-12-06 11:29:10.439195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.439203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.439527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.439535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.439744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.439751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.439967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.439975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.440290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.440299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 
00:30:04.479 [2024-12-06 11:29:10.440617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.440625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.440810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.440817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.440996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.441005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.441300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.441309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.441482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.441489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 
00:30:04.479 [2024-12-06 11:29:10.441830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.441839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.442171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.442180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.442518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.442526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.442741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.442750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.442907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.442915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 
00:30:04.479 [2024-12-06 11:29:10.443145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.443153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.443323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.443331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.443494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.443501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.443815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.443823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.444165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.444173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 
00:30:04.479 [2024-12-06 11:29:10.444374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.444382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.444638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.444646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.444684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.444692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.445015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.479 [2024-12-06 11:29:10.445024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.479 qpair failed and we were unable to recover it. 00:30:04.479 [2024-12-06 11:29:10.445341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.445349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 
00:30:04.480 [2024-12-06 11:29:10.445512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.445519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.445682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.445690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.445993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.446001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.446358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.446366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.446688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.446696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 
00:30:04.480 [2024-12-06 11:29:10.447028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.447036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.447347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.447355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.447529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.447536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.447699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.447708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.448009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.448018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 
00:30:04.480 [2024-12-06 11:29:10.448335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.448343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.448665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.448673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.448852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.448860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.449197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.449205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.449535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.449543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 
00:30:04.480 [2024-12-06 11:29:10.449878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.449886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.450187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.450195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.450513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.450520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.450815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.450823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.451028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.451038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 
00:30:04.480 [2024-12-06 11:29:10.451321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.451329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.451668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.451676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.451995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.452003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.452328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.452336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.452648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.452656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 
00:30:04.480 [2024-12-06 11:29:10.452968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.452977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.453141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.453149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.453493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.453501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.453856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.453867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.454036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.454043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 
00:30:04.480 [2024-12-06 11:29:10.454235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.454244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.454558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.454566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.454877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.454886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.480 [2024-12-06 11:29:10.455242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.480 [2024-12-06 11:29:10.455250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.480 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.455555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.455562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.455776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.455785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.455975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.455983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.456170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.456180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.456519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.456527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.456686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.456693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.456887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.456896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.457180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.457189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.457531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.457539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.457850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.457858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.458049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.458058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.458374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.458381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.458601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.458610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.458924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.458932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.459103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.459111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.459421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.459429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.459758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.459766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.460067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.460075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.460380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.460388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.460568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.460577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.460801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.460809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.461130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.461138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.461312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.461319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.461650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.461658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.461829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.461837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.462119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.462127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.462381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.462388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.462711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.462719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.462763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.462770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.463068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.463076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.463426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.463435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.463743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.463752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.464098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.464106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.464437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.464445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.464620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.464628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.464780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.464787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 
00:30:04.481 [2024-12-06 11:29:10.464966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.464973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.481 qpair failed and we were unable to recover it. 00:30:04.481 [2024-12-06 11:29:10.465308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.481 [2024-12-06 11:29:10.465316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.465646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.465654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.465926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.465934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.466131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.466138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 
00:30:04.482 [2024-12-06 11:29:10.466470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.466478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.466630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.466638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.466952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.466962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.467285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.467293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.467468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.467476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 
00:30:04.482 [2024-12-06 11:29:10.467706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.467714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.467992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.468000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.468349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.468357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.468519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.468526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.468713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.468722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 
00:30:04.482 [2024-12-06 11:29:10.468886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.468894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.469221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.469229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.469413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.469422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.469711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.469720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.469901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.469909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 
00:30:04.482 [2024-12-06 11:29:10.470128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.470137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.470292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.470300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.470541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.470549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.470869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.470877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.471258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.471267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 
00:30:04.482 [2024-12-06 11:29:10.471444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.471452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.471748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.471756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.472107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.472115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.472318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.472326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.472512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.472519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 
00:30:04.482 [2024-12-06 11:29:10.472713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.472721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.472934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.472941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.473157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.473166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.473262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.473270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0784000b90 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.473381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239c030 is same with the state(6) to be set 00:30:04.482 [2024-12-06 11:29:10.473740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.473759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 
00:30:04.482 [2024-12-06 11:29:10.474114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.474128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.474304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.474314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.474652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.474663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.474992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.482 [2024-12-06 11:29:10.475003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.482 qpair failed and we were unable to recover it. 00:30:04.482 [2024-12-06 11:29:10.475218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.475230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-12-06 11:29:10.475516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.475527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.475756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.475768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.476066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.476077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.476390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.476401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.476709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.476720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-12-06 11:29:10.477031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.477043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.477349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.477360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.477737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.477748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.478089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.478102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.478280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.478291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-12-06 11:29:10.478455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.478465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.478760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.478771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.479034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.479045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.479256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.479267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.479589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.479600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-12-06 11:29:10.479934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.479946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.480281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.480292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.480636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.480648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.480837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.480848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.481147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.481159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-12-06 11:29:10.481335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.481346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.481554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.481565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.481885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.481897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.482088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.482099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.482425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.482435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-12-06 11:29:10.482742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.482753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.482804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.482814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.483112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.483126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.483443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.483454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-12-06 11:29:10.483792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.483803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-12-06 11:29:10.483979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-12-06 11:29:10.483991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
[... the identical error pair — posix_sock_create connect() failure (errno = 111, ECONNREFUSED) followed by an unrecoverable sock connection error on tqpair=0x239f490 with addr=10.0.0.2, port=4420 — repeats continuously from 11:29:10.483 through 11:29:10.514; repeated occurrences elided ...]
00:30:04.486 [2024-12-06 11:29:10.514756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.486 [2024-12-06 11:29:10.514768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.486 qpair failed and we were unable to recover it. 
00:30:04.486 [2024-12-06 11:29:10.515090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.486 [2024-12-06 11:29:10.515101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.486 qpair failed and we were unable to recover it. 00:30:04.486 [2024-12-06 11:29:10.515405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.486 [2024-12-06 11:29:10.515417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.486 qpair failed and we were unable to recover it. 00:30:04.486 [2024-12-06 11:29:10.515577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.486 [2024-12-06 11:29:10.515587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.486 qpair failed and we were unable to recover it. 00:30:04.486 [2024-12-06 11:29:10.515926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.486 [2024-12-06 11:29:10.515937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.486 qpair failed and we were unable to recover it. 00:30:04.486 [2024-12-06 11:29:10.516275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.516286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.516640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.516651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.516954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.516965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.517272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.517283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.517606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.517617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.517803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.517816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.518011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.518022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.518349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.518362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.518659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.518670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.518998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.519009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.519343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.519354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.519509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.519519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.519838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.519848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.520160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.520171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.520439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.520451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.520776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.520787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.520972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.520983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.521263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.521274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.521455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.521467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.521782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.521792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.521845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.521855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.522188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.522198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.522416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.522427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.522741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.522752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.523062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.523074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.523391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.523402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.523575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.523587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.523633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.523644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.523966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.523977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.524281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.524292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.524477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.524488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.524661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.524672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.524854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.524867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.525207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.525219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.525408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.525424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.525800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.525811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 
00:30:04.487 [2024-12-06 11:29:10.525986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.525997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.526303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.526314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.487 [2024-12-06 11:29:10.526643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.487 [2024-12-06 11:29:10.526653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.487 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.526946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.526957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.527159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.527169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-12-06 11:29:10.527387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.527398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.527587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.527598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.527648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.527657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.527960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.527971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.528308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.528319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-12-06 11:29:10.528515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.528526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.528839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.528849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.529027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.529039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.529207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.529219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.529510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.529520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-12-06 11:29:10.529930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.529941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.530244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.530255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.530530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.530540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.530853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.530867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.531173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.531184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-12-06 11:29:10.531313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.531323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.531508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.531520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.531703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.531715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.531906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.531919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.532252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.532263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-12-06 11:29:10.532577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.532590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.532766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.532777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.533099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.533111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.533428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.533438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.533730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.533741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-12-06 11:29:10.534074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.534086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.534283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.534293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.534599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.534609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.534794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.534806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-12-06 11:29:10.535009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-12-06 11:29:10.535020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-12-06 11:29:10.535348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-12-06 11:29:10.535359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x239f490 at 10.0.0.2:4420, "qpair failed and we were unable to recover it") repeats continuously with advancing timestamps from 11:29:10.535 through 11:29:10.566; repeats omitted ...]
00:30:04.491 [2024-12-06 11:29:10.567021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-12-06 11:29:10.567033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-12-06 11:29:10.567305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-12-06 11:29:10.567315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-12-06 11:29:10.567611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-12-06 11:29:10.567622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.567843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.567854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.568174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.568185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.568541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.568552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.568867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.568878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.569167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.569178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.569496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.569506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.569793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.569803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.570122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.570133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.570471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.570482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.570669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.570681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.571000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.571011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.571308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.571320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.571622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.571633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.571939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.571950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.572144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.572155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.572467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.572478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.572819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.572830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.573032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.573043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.573377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.573388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.573686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.573697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.574042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.574053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.574320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.574330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.574517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.574528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.574838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.574849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.575162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.575173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.575477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.575488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.575821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.575832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.576012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.576023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.576305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.576315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.576506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.576518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.576745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.576755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.577078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.577090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.577151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.577160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.577438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.577448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.577757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.577767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.577867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.577879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.492 [2024-12-06 11:29:10.578166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.578176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 
00:30:04.492 [2024-12-06 11:29:10.578554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.492 [2024-12-06 11:29:10.578565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.492 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.578875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.578887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.579069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.579079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.579293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.579304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.579612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.579622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 
00:30:04.493 [2024-12-06 11:29:10.579927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.579938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.580237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.580248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.580432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.580443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.580616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.580628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.580814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.580825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 
00:30:04.493 [2024-12-06 11:29:10.581109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.581120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.581390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.581401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.581776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.581788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.581893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.581903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.582185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.582195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 
00:30:04.493 [2024-12-06 11:29:10.582558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.582568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.582764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.582775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.583086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.583098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.583396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.583407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.583694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.583705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 
00:30:04.493 [2024-12-06 11:29:10.583929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.583940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.584102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.584113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.584443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.584454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.584495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.584504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.584817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.584828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 
00:30:04.493 [2024-12-06 11:29:10.585027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.585041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.585367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.585378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.585684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.585694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.586008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.586020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.586209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.586221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 
00:30:04.493 [2024-12-06 11:29:10.586514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.586524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.586701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.586712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.587014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.587025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.587317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.587327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.587623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.587633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 
00:30:04.493 [2024-12-06 11:29:10.587797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.493 [2024-12-06 11:29:10.587807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.493 qpair failed and we were unable to recover it. 00:30:04.493 [2024-12-06 11:29:10.588136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.494 [2024-12-06 11:29:10.588147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.494 qpair failed and we were unable to recover it. 00:30:04.494 [2024-12-06 11:29:10.588434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.494 [2024-12-06 11:29:10.588446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.494 qpair failed and we were unable to recover it. 00:30:04.494 [2024-12-06 11:29:10.588631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.494 [2024-12-06 11:29:10.588642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.494 qpair failed and we were unable to recover it. 00:30:04.494 [2024-12-06 11:29:10.588983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.494 [2024-12-06 11:29:10.588994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.494 qpair failed and we were unable to recover it. 
00:30:04.781 [2024-12-06 11:29:10.619980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.781 [2024-12-06 11:29:10.619991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.781 qpair failed and we were unable to recover it. 00:30:04.781 [2024-12-06 11:29:10.620294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.781 [2024-12-06 11:29:10.620306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.781 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.620480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.620490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.620854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.620869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.621144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.621156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 
00:30:04.782 [2024-12-06 11:29:10.621436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.621446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.621590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.621601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.621933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.621944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.622238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.622249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.622424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.622436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 
00:30:04.782 [2024-12-06 11:29:10.622625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.622638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.622959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.622971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.623342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.623352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.623666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.623677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.624003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.624014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 
00:30:04.782 [2024-12-06 11:29:10.624352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.624362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.624707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.624718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.624907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.624919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.625223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.625234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.625398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.625410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 
00:30:04.782 [2024-12-06 11:29:10.625737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.625747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.625950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.625961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.626131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.782 [2024-12-06 11:29:10.626142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.782 qpair failed and we were unable to recover it. 00:30:04.782 [2024-12-06 11:29:10.626389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.626399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.626573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.626585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 
00:30:04.783 [2024-12-06 11:29:10.626890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.626901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.627194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.627205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.627546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.627557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.627864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.627874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.628061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.628072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 
00:30:04.783 [2024-12-06 11:29:10.628263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.628273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.628587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.628598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.628747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.628757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.629047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.629058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.629245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.629255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 
00:30:04.783 [2024-12-06 11:29:10.629579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.629589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.629758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.629770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.630083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.630094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.630377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.630388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.630628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.630639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 
00:30:04.783 [2024-12-06 11:29:10.630827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.630838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.631169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.631180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.631383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.631395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.631740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.631751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.632092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.632103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 
00:30:04.783 [2024-12-06 11:29:10.632389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.632400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.783 [2024-12-06 11:29:10.632709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.783 [2024-12-06 11:29:10.632721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.783 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.633037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.633048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.633342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.633353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.633539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.633550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 
00:30:04.784 [2024-12-06 11:29:10.633717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.633728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.634060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.634073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.634382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.634394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.634479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.634491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.634760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.634772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 
00:30:04.784 [2024-12-06 11:29:10.635069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.635080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.635272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.635283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.635568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.635580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.635881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.635893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.635935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.635944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 
00:30:04.784 [2024-12-06 11:29:10.636252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.636263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.636542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.636554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.636897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.636909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.637101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.637112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.637285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.637296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 
00:30:04.784 [2024-12-06 11:29:10.637351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.637367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.637629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.637639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.637974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.637985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.638299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.638311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.784 qpair failed and we were unable to recover it. 00:30:04.784 [2024-12-06 11:29:10.638647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.784 [2024-12-06 11:29:10.638657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.785 qpair failed and we were unable to recover it. 
00:30:04.785 [2024-12-06 11:29:10.638824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.785 [2024-12-06 11:29:10.638835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.785 qpair failed and we were unable to recover it. 00:30:04.785 [2024-12-06 11:29:10.639252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.785 [2024-12-06 11:29:10.639265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.785 qpair failed and we were unable to recover it. 00:30:04.785 [2024-12-06 11:29:10.639487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.785 [2024-12-06 11:29:10.639498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.785 qpair failed and we were unable to recover it. 00:30:04.785 [2024-12-06 11:29:10.639827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.785 [2024-12-06 11:29:10.639837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.785 qpair failed and we were unable to recover it. 00:30:04.785 [2024-12-06 11:29:10.640170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.785 [2024-12-06 11:29:10.640182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.785 qpair failed and we were unable to recover it. 
00:30:04.785 [2024-12-06 11:29:10.640482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-12-06 11:29:10.640493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.788 [2024-12-06 11:29:10.671583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.671593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.671887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.671899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.672228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.672239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.672555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.672567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.672741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.672753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.673029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.673041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.673374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.673386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.673688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.673699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.673877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.673888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.674228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.674239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.674562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.674574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.674777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.674788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.675087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.675098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.675408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.675419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.675728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.675740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.676039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.676053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.676356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.676367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.676553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.676565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.676867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.676879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.677162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.677174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.677464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.677475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.677695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.677707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.678024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.678035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.678362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.678373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.678587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.678601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.678929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.678940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.679128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.679140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.679327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.679337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.679688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.679699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.679939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.679950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.680273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.680285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.680588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.680600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.680785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.680797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.681197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.681208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.681519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.681530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.681712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.681724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.681897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.681910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.682082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.682093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.682428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.682440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.682786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.682797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-12-06 11:29:10.683110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.683121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-12-06 11:29:10.683298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-12-06 11:29:10.683310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.683646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.683657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.684004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.684015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.684319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.684331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-12-06 11:29:10.684519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.684530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.684869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.684881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.685102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.685113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.685480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.685492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.685858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.685873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-12-06 11:29:10.686165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.686176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.686482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.686494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.686829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.686841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.687155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.687166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.687555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.687567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-12-06 11:29:10.687751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.687764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.687950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.687962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.688250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.688262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.688437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.688449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.688640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.688652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-12-06 11:29:10.688820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.688833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.689139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.689152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.689481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.689493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.689809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.689820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.690016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.690029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-12-06 11:29:10.690363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.690375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.690699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.690710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.691082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.691094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.691399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.691410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.691562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.691581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-12-06 11:29:10.691754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.691765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.692049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.692061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.692386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.692397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.692764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.692777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-12-06 11:29:10.692819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-12-06 11:29:10.692830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-12-06 11:29:10.692924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.789 [2024-12-06 11:29:10.692936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.789 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for roughly 114 further connection attempts between 11:29:10.693110 and 11:29:10.724990, every one with errno = 111 against tqpair=0x239f490, addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:30:04.791 [2024-12-06 11:29:10.725330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.725341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.725641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.725653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.725991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.726003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.726204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.726216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.726510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.726521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.726828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.726839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.727179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.727192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.727369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.727380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.727569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.727579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.727989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.728001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.728322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.728333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.728517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.728528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.728685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.728697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.728998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.729010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.729194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.729206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.729514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.729525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.729836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.729847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.730019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.730031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.730207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.730217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.730547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.730558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.730947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.730959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.731153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.731165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.731339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.731349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.731646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.731656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.731966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.731980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.732155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.732167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.732448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.732459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.732761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.732772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.733079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.733090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.733400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.733411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.733612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.733623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.733929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.733940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.734253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.734264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.734571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.734582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.734951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.734963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.735139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.735150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.735374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.735386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.735712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.735723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.736065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.736076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.736376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.736387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.736705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.736716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.737027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.737039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.737223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.737234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.737506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.737517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-12-06 11:29:10.737907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.737919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-12-06 11:29:10.738277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-12-06 11:29:10.738288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.738613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.738624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.738814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.738825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.739118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.739130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.739435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.739447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 
00:30:04.792 [2024-12-06 11:29:10.739740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.739751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.740046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.740060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.740242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.740254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.740446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.740457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.740625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.740637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 
00:30:04.792 [2024-12-06 11:29:10.740931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.740943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.741271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.741282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.741446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.741459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.741675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.741685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.741857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.741873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 
00:30:04.792 [2024-12-06 11:29:10.742189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.742201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.742372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.742384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.742649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.742660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.742989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.743000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.743336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.743348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 
00:30:04.792 [2024-12-06 11:29:10.743714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.743725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.743897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.743908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.744112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.744123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.744310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.744321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.744629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.744640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 
00:30:04.792 [2024-12-06 11:29:10.744799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.744810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.745007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.745018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.745235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.745247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.745556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.745566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.745859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.745880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 
00:30:04.792 [2024-12-06 11:29:10.746188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.746200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.746366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.746378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.746646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.746657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.746952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.746964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 00:30:04.792 [2024-12-06 11:29:10.747041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.792 [2024-12-06 11:29:10.747051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.792 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.778539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.778549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.778904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.778916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.779275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.779286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.779602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.779614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.779797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.779807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.780026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.780038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.780371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.780382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.780678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.780689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.780735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.780744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.781053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.781065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.781227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.781240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.781556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.781566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.781898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.781909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.782261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.782272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.782614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.782625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.782934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.782946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.783117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.783129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.783288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.783299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.783346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.783357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.783657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.783668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.783976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.783988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.784309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.784320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.784613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.784624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.784974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.784986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.785161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.785175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.785516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.785527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.785738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.785749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.786058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.786070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.786388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.786399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.786733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.786744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.787174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.787185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.787339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.787350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.787632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.787643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.787693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.787702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.788016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.788027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.788356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.788368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.788671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.788682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.789000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.789011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.789324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.789335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.789646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.789657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.789940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.789953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.790327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.790339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.790581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.790592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.790926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.790937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.791284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.791294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.791603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.791614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.791990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.792001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.792359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.792371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.792700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.792711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.792825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.792835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 
00:30:04.794 [2024-12-06 11:29:10.793163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.794 [2024-12-06 11:29:10.793174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.794 qpair failed and we were unable to recover it. 00:30:04.794 [2024-12-06 11:29:10.793363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.793376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.793694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.793705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.794023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.794034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.794215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.794226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 
00:30:04.795 [2024-12-06 11:29:10.794412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.794423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.794729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.794740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.795052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.795063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.795389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.795400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.795755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.795765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 
00:30:04.795 [2024-12-06 11:29:10.795940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.795951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.796130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.796141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.796451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.796462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.796803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.796814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.797103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.797114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 
00:30:04.795 [2024-12-06 11:29:10.797525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.797536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.797873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.797884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.798210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.798221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.798555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.798566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.798773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.798784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 
00:30:04.795 [2024-12-06 11:29:10.799077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.799089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.799426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.799437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.799709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.799720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.800049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.800060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 00:30:04.795 [2024-12-06 11:29:10.800382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.795 [2024-12-06 11:29:10.800393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.795 qpair failed and we were unable to recover it. 
00:30:04.795 [2024-12-06 11:29:10.800574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.795 [2024-12-06 11:29:10.800586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.795 qpair failed and we were unable to recover it.
00:30:04.797 [... identical connect() retry sequence repeats for roughly 115 consecutive attempts, [2024-12-06 11:29:10.800901] through [2024-12-06 11:29:10.832118]: every attempt to tqpair=0x239f490 (addr=10.0.0.2, port=4420) fails in posix_sock_create with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error, and the qpair is not recovered ...]
00:30:04.797 [2024-12-06 11:29:10.832443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.832454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.832647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.832658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.832971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.832983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.833313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.833324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.833622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.833634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.833970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.833981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.834331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.834342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.834518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.834528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.834828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.834840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.835022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.835034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.835315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.835326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.835489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.835501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.835755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.835765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.836069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.836080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.836336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.836347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.836656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.836667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.836970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.836982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.837317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.837327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.837611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.837622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.837959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.837970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.838277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.838288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.838464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.838474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.838780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.838792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.839093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.839105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.839430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.839442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.839634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.839645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.839964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.839976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.840147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.840159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.840288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.840299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.840637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.840649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.840837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.840849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.841166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.841179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.841490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.841501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.841816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.841827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.842034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.842046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.842354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.842366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.842676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.842687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.843007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.843019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.843346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.843358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.843665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.843678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.844018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.844029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.844198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.844210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.844387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.844398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.844708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.844718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.845031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.845042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.845357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.845367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.845705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.845716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.846063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.846075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.846248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.846259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.846547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.846558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 
00:30:04.797 [2024-12-06 11:29:10.846895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.846906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.847226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.847240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.847577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.797 [2024-12-06 11:29:10.847589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.797 qpair failed and we were unable to recover it. 00:30:04.797 [2024-12-06 11:29:10.847956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.847967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.848257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.848268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.848428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.848440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.848617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.848628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.848937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.848948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.849246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.849257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.849441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.849452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.849637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.849648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.849972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.849984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.850269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.850280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.850582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.850593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.850767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.850779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.850965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.850977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.851142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.851153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.851479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.851490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.851806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.851817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.852129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.852139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.852318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.852330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.852496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.852507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.852834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.852845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.853151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.853162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.853345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.853357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.853654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.853665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.853997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.854008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.854348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.854360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.854696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.854709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.855088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.855099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.855386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.855399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.855694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.855707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.856102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.856114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.856426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.856437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.856768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.856779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.856934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.856945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.857149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.857160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.857479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.857490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.857825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.857835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.858150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.858161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.858345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.858366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.858535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.858545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.858873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.858886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.859199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.859209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.859512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.859523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.859831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.859842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.860149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.860161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.860325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.860337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.860662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.860673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.861035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.861046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.861332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.861343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.861658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.861670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.861976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.861989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.862302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.862314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.862649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.862660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.862970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.862984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.863277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.863288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.863592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.863603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.863937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.863949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.864240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.864251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.864562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.864573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.864754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.864766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.865076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.865089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.865382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.865394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-12-06 11:29:10.865655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-12-06 11:29:10.865668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 
00:30:04.798 [2024-12-06 11:29:10.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.866000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.866197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.866209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.866523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.866535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.866831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.866842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.867197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.867210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.867575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.867587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.867633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.867644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.867931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.867943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.868270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.868282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.868462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.868474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.868678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.868691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.868993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.869006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.869332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.869344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.869644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.869656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.869938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.869951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.870261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.870272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.870571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.870583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.870897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.870910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.871300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.871480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.871491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.871828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.871840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.872199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.872212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.872402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.872415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.872590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.872602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.872929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.872942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.873259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.873271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.873582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.873594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.873928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.873941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.874241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.874253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.874470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.874481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.874733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.874745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.875042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.875054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.875384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.875395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.875705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.875717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.875892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.875904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.876239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.876252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.876432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.876444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.876619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.876631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.876803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.876815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.877109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.877121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.877306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.877318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.877650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.877662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.877980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.877992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.878200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.878214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.878543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.878555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.878790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.878802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.879012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.879024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.879247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.879259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.879437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.879449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.879665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.879677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.879885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.879897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.880080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.880092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.880367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.880379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.880677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.880689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.880991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.881004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.881306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.881318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.881645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.881656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.881977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.881989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.882280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.882294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.882636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.882647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.883016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.883028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.883363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.883374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.883550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.883561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.883879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.883890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-12-06 11:29:10.884290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-12-06 11:29:10.884301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-12-06 11:29:10.884653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.799 [2024-12-06 11:29:10.884664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.799 qpair failed and we were unable to recover it.
00:30:04.799 [2024-12-06 11:29:10.884847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.799 [2024-12-06 11:29:10.884859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.799 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.885192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.885203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.885510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.885521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.885787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.885798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.886121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.886133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.886462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.886473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.886669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.886681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.886873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.886884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.887100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.887113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.887313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.887325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.887498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.887511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.887803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.887814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.888108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.888121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.888300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.888312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.888643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.888653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.888981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.888993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.889403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.889414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.889751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.889762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.890066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.890078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.890390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.890404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.890617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.890628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.890937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.890950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.891109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.891120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.891401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.891412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.891689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.891702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.892046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.892058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.892396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.892407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.892711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.892723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.892987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.892999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.893293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.893304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.893603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.893614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.893776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.893788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.894064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.894075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.894259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.894270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.894576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.894587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.894758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.894768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.894929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.894941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.895151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.895163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.895378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.895389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.895591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.895602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.895786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.895796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.895968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.895979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.896302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.896313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.896646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.896657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.896788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.896799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.897110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.897121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.897452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.897464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.897749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.897760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.898050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.898064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.898260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.898271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.898613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.898624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.898928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.898939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.899222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.899233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.899414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.899426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.899712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.899723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.900037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.900049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.900365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.900376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.900684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.900695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.900756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.900765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.901069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.901080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.901364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.901375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.901550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.901562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.901873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.800 [2024-12-06 11:29:10.901885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.800 qpair failed and we were unable to recover it.
00:30:04.800 [2024-12-06 11:29:10.902083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.902094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.902378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.902390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.902578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.902589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.902934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.902946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.903248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.903259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.903435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.903447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.903763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.903775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.904114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.904126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.904461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.904472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.904764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.904776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.904994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.905006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.905175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.905186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.905346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.905357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.905557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.905568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.905741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.905753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.906105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.906117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.906168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.906178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.906458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.906469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.906723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.906735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.906947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.906960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.907249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.907259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.907569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.907580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.907757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.907768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.908103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.908114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.908418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.908431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.908704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.908715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.908888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.908900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.909072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.909084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.909252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.909264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.909440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.909451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.909760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-12-06 11:29:10.909771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 [2024-12-06 11:29:10.909935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.909947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.910234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.910245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.910582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.910593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.910930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.910941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.911259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.911270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-12-06 11:29:10.911583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.911594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.911879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.911890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.912212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.912225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.912532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.912544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.912725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.912736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-12-06 11:29:10.913010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.913023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.913172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.913184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.913369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.913381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.913589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.913601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.913796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.913807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-12-06 11:29:10.913971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.913982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.914295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.914306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.914614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.914625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.914928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.914939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.915176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.915188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-12-06 11:29:10.915446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.915460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.915802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.915813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.916178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.916189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.916500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.916511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.916792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.916803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-12-06 11:29:10.917026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.917038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.917365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.917376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.917577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.917588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.917773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.917784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.918124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.918136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-12-06 11:29:10.918326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.918337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.918575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.918587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.918876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.918888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.919217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.919228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-12-06 11:29:10.919419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-12-06 11:29:10.919431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-12-06 11:29:10.919749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.919761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.919953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.919966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.920141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.920152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.920417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.920428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.920762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.920773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 
00:30:04.802 [2024-12-06 11:29:10.921180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.921192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.921478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.921491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.921675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.921685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.922005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.922017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.922238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.922250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 
00:30:04.802 [2024-12-06 11:29:10.922560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.922571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.922753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.922763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.923050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.923066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.923399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.923410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.923619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.923631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 
00:30:04.802 [2024-12-06 11:29:10.923946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.923958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.924259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.924270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.924560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.924571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.924899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.924911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.925209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.925220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 
00:30:04.802 [2024-12-06 11:29:10.925551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.925563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.925734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.925745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.925795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.925807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.926131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.926143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.926453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.926464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 
00:30:04.802 [2024-12-06 11:29:10.926725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.926736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.927054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.927067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.927243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.927255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.927543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.927554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.927833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.927844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 
00:30:04.802 [2024-12-06 11:29:10.928152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.928164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.928463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.928475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:04.802 [2024-12-06 11:29:10.928799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.802 [2024-12-06 11:29:10.928811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:04.802 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.929035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.929049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.929393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.929405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 
00:30:05.075 [2024-12-06 11:29:10.929455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.929464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.929788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.929799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.930099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.930110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.930338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.930349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.930441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.930450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 
00:30:05.075 [2024-12-06 11:29:10.930732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.930743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.931041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.931053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.931233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.931243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.931529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.931540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 00:30:05.075 [2024-12-06 11:29:10.931743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.075 [2024-12-06 11:29:10.931755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.075 qpair failed and we were unable to recover it. 
00:30:05.075 [2024-12-06 11:29:10.931935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.076 [2024-12-06 11:29:10.931945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.076 qpair failed and we were unable to recover it. 00:30:05.076 [2024-12-06 11:29:10.932128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.076 [2024-12-06 11:29:10.932139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.076 qpair failed and we were unable to recover it. 00:30:05.076 [2024-12-06 11:29:10.932325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.076 [2024-12-06 11:29:10.932336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.076 qpair failed and we were unable to recover it. 00:30:05.076 [2024-12-06 11:29:10.932586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.076 [2024-12-06 11:29:10.932598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.076 qpair failed and we were unable to recover it. 00:30:05.076 [2024-12-06 11:29:10.932774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.076 [2024-12-06 11:29:10.932785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.076 qpair failed and we were unable to recover it. 
00:30:05.076 [2024-12-06 11:29:10.932985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.932998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.933288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.933299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.933455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.933465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.933794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.933805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.934005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.934017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.934306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.934318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.934581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.934593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.934804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.934816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.935160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.935171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.935357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.935368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.935494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.935505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.935803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.935814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.936065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.936076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.936255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.936266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.936575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.936587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.936734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.936745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.936990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.937000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.937337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.937349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.937651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.937663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.937967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.076 [2024-12-06 11:29:10.937978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.076 qpair failed and we were unable to recover it.
00:30:05.076 [2024-12-06 11:29:10.938318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.938330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.938518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.938529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.938798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.938809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.939145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.939157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.939349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.939361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.939637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.939648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.939923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.939935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.940277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.940289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.940629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.940640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.940987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.940998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.941287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.941301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.941596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.941607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.941946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.941958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.942298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.942310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.942637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.942648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.942979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.942990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.943303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.943314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.943604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.943616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.943931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.943942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.944234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.944245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.944430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.944442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.944655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.944666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.944969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.944981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.945290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.945302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.945574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.077 [2024-12-06 11:29:10.945584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.077 qpair failed and we were unable to recover it.
00:30:05.077 [2024-12-06 11:29:10.945883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.945895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.946072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.946084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.946293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.946304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.946613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.946624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.946915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.946927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.947240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.947252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.947433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.947445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.947780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.947791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.948105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.948117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.948453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.948465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.948800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.948811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.949117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.949128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.949433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.949447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.949721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.949732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.950039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.950051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.950371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.950382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.953285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.953326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.953644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.953657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.954086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.954125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.954478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.954492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.954828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.954839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.955016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.955028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.955324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.955335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.955621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.955632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.955812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.955824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.956135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.078 [2024-12-06 11:29:10.956148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.078 qpair failed and we were unable to recover it.
00:30:05.078 [2024-12-06 11:29:10.956446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.956458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.956763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.956775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.956951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.956963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.957139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.957151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.957213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.957223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.957523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.957534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.957807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.957819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.958164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.958176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.958485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.958496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.958631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.958642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.958891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.958903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.959202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.959214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.959497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.959509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.959814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.959825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.960256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.960267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.960449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.960461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.960637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.960649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.960821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.960833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.960999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.961011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.961340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.961351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.961580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.961591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.961928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.961940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.962314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.962325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.962621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.962632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.962834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.962845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.963157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.963169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.963304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.963315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.079 qpair failed and we were unable to recover it.
00:30:05.079 [2024-12-06 11:29:10.963860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.079 [2024-12-06 11:29:10.963989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f078c000b90 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.964430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.964468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f078c000b90 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.964770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.964783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.965101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.965114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.965451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.965462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.965782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.965793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.966107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.966119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.966441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.966453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.966643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.966655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.966972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.966983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.967163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.967175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.967502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.967514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.967831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.967842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.968159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.968171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.968451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.968464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.968803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.968815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.968993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.080 [2024-12-06 11:29:10.969006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.080 qpair failed and we were unable to recover it.
00:30:05.080 [2024-12-06 11:29:10.969289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.969300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.969493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.969505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.969835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.969847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.970063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.970074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.970274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.970286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 
00:30:05.080 [2024-12-06 11:29:10.970468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.970480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.970685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.970696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.971012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.971024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.971209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.971221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.971386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.971398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 
00:30:05.080 [2024-12-06 11:29:10.971599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.971613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.971795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.971806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.972095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.972107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.972417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.972429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.972736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.972747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 
00:30:05.080 [2024-12-06 11:29:10.973076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.973088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.973406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.973418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.973740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.973752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.974078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.974090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.974137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.974148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 
00:30:05.080 [2024-12-06 11:29:10.974321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.974333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.080 [2024-12-06 11:29:10.974660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.080 [2024-12-06 11:29:10.974672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.080 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.974981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.974994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.975314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.975325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.975651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.975663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 
00:30:05.081 [2024-12-06 11:29:10.975971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.975983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.976308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.976320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.976635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.976646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.976832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.976845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.977038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.977050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 
00:30:05.081 [2024-12-06 11:29:10.977374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.977386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.977566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.977579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.977759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.977770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.978057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.978069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.978389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.978400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 
00:30:05.081 [2024-12-06 11:29:10.978587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.978598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.978814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.978826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.979148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.979163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.979503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.979515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.979729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.979740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 
00:30:05.081 [2024-12-06 11:29:10.979904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.979915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.980243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.980254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.980557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.980569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.980845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.980856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.981060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.981072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 
00:30:05.081 [2024-12-06 11:29:10.981241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.981252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.981429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.981440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.981743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.981756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.982101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.982114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.982310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.982321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 
00:30:05.081 [2024-12-06 11:29:10.982642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.982653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.982871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.982883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.983180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.983192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.983481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.983493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.081 [2024-12-06 11:29:10.983753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.983765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 
00:30:05.081 [2024-12-06 11:29:10.983917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.081 [2024-12-06 11:29:10.983929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.081 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.984249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.984261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.984491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.984503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.984786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.984798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.984983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.984995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 
00:30:05.082 [2024-12-06 11:29:10.985270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.985281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.985455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.985466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.985840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.985851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.986044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.986056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.986240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.986253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 
00:30:05.082 [2024-12-06 11:29:10.986441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.986452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.986688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.986700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.987033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.987045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.987381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.987393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.987699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.987712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 
00:30:05.082 [2024-12-06 11:29:10.987889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.987900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.988207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.988219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.988558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.988570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.988734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.988745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.989068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.989080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 
00:30:05.082 [2024-12-06 11:29:10.989384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.989395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.989702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.989713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.990040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.990054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.990254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.990267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.990572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.990584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 
00:30:05.082 [2024-12-06 11:29:10.990631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.990643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.990926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.990938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.991168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.991179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.991378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.991390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.991561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.991573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 
00:30:05.082 [2024-12-06 11:29:10.991766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.991777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.992076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.992089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.992306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.992319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.992603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.992614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.992956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.992967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 
00:30:05.082 [2024-12-06 11:29:10.993259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.993270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.993457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.993467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.993774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.993785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.994096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.082 [2024-12-06 11:29:10.994108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.082 qpair failed and we were unable to recover it. 00:30:05.082 [2024-12-06 11:29:10.994440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.994451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:10.994638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.994650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.994956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.994969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.995247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.995258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.995319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.995331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.995628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.995639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:10.995955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.995967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.996271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.996282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.996466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.996477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.996768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.996779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.997079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.997090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:10.997422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.997434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.997769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.997780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.997992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.998003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.998322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.998333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.998668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.998680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:10.998989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.999001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.999168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.999180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:10.999565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:10.999657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f078c000b90 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.000077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.000119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f078c000b90 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.000519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.000533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:11.000849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.000860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.001070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.001081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.001271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.001282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.001638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.001649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.001980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.001991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:11.002303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.002314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.002400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.002410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.002684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.002695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.002874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.002886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.003194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.003204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:11.003385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.003396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.003683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.003694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.004032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.004044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.004392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.004403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.004617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.004627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 
00:30:05.083 [2024-12-06 11:29:11.004954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.004965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.005270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.005281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.083 qpair failed and we were unable to recover it. 00:30:05.083 [2024-12-06 11:29:11.005587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.083 [2024-12-06 11:29:11.005601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.005877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.005889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.006212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.006223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.006535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.006547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.006595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.006606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.006905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.006917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.007111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.007122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.007494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.007505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.007813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.007823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.008151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.008162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.008498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.008510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.008838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.008849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.009061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.009072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.009378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.009388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.009720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.009731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.009922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.009942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.010110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.010120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.010338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.010349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.010535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.010547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.010721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.010732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.011000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.011012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.011301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.011312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.011507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.011518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.011704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.011715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.011924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.011936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.012247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.012258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.012465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.012476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.012648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.012661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.012975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.012986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.013196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.013208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.013410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.013421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.013589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.013601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.013925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.013937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.014277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.014288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.014631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.014642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.014951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.014963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.015136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.015147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.015437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.015448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 
00:30:05.084 [2024-12-06 11:29:11.015658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.084 [2024-12-06 11:29:11.015669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.084 qpair failed and we were unable to recover it. 00:30:05.084 [2024-12-06 11:29:11.015827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.015837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 00:30:05.085 [2024-12-06 11:29:11.016041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.016053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 00:30:05.085 [2024-12-06 11:29:11.016376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.016387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 00:30:05.085 [2024-12-06 11:29:11.016687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.016699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 
00:30:05.085 [2024-12-06 11:29:11.017011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.017022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 00:30:05.085 [2024-12-06 11:29:11.017407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.017417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 00:30:05.085 [2024-12-06 11:29:11.017603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.017614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 00:30:05.085 [2024-12-06 11:29:11.017989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.018000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 00:30:05.085 [2024-12-06 11:29:11.018295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.085 [2024-12-06 11:29:11.018307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.085 qpair failed and we were unable to recover it. 
00:30:05.085 [2024-12-06 11:29:11.018489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.018501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.018781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.018792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.019112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.019123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.019438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.019449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.019679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.019690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.019999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.020011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.020198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.020211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.020386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.020397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.020693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.020704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.021035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.021047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.021363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.021375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.021690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.021701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.021885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.021897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.022041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.022052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.022375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.022386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.022567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.022578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.022763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.022774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.023167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.023178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.023487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.023498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.023681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.023692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.024014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.024026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.024208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.024218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.024520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.024531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.024718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.024730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.025094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.025106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.025420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.025430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.025630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.025641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.085 qpair failed and we were unable to recover it.
00:30:05.085 [2024-12-06 11:29:11.025964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.085 [2024-12-06 11:29:11.025975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.026255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.026266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.026594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.026606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.026950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.026962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.027153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.027164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.027456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.027468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.027800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.027810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.028120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.028132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.028189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.028199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.028498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.028510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.028813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.028825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.029162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.029174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.029483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.029495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.029838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.029850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.030166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.030177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:05.086 [2024-12-06 11:29:11.030547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.030561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:05.086 [2024-12-06 11:29:11.030734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.030747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:05.086 [2024-12-06 11:29:11.031066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.031082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:05.086 [2024-12-06 11:29:11.031385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.031400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.086 [2024-12-06 11:29:11.031709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.031721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.031875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.031887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.032196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.032207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.032546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.032558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.032897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.032909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.033088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.033100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.033395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.033409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.033718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.033729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.033998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.034010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.034231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.086 [2024-12-06 11:29:11.034242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.086 qpair failed and we were unable to recover it.
00:30:05.086 [2024-12-06 11:29:11.034544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.034556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.034866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.034879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.035089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.035101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.035287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.035297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.035621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.035633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.035906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.035919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.036083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.036094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.036430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.036441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.036604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.036616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.036923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.036934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.037022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.037031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.037488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.037499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.037709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.037720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.037774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.037784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.038098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.038109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.038453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.038464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.038643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.038658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.038990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.039001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.039049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.039058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.039362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.039374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.039702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.039713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.040030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.040042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.040397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.040409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.040727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.040738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.040790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.040799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.040983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.040995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.041171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.041181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.041518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.041529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.041871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.041884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.042089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.042099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.042321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.042333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.042498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.042509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.042803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.042814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.043133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.043145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.043479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.043491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.043658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.043670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.043836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.087 [2024-12-06 11:29:11.043847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.087 qpair failed and we were unable to recover it.
00:30:05.087 [2024-12-06 11:29:11.044180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.044191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.044472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.044484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.044794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.044806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.045143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.045154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.045533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.045544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 
00:30:05.088 [2024-12-06 11:29:11.045736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.045749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.046084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.046097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.046413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.046425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.046764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.046776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.046988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.047000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 
00:30:05.088 [2024-12-06 11:29:11.047284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.047295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.047446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.047457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.047764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.047775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.047957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.047969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.048326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.048337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 
00:30:05.088 [2024-12-06 11:29:11.048517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.048529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.048798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.048810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.049147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.049159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.049460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.049471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.049759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.049771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 
00:30:05.088 [2024-12-06 11:29:11.050078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.050090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.050461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.050474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.050654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.050666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.050850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.050864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.051065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.051076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 
00:30:05.088 [2024-12-06 11:29:11.051353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.051366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.051554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.051566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.051897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.051909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.052073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.052083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.052288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.052300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 
00:30:05.088 [2024-12-06 11:29:11.052460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.052473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.052784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.052795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.053134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.053147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.053325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.053340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.053631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.053642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 
00:30:05.088 [2024-12-06 11:29:11.053819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.088 [2024-12-06 11:29:11.053832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.088 qpair failed and we were unable to recover it. 00:30:05.088 [2024-12-06 11:29:11.054122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.054133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.054215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.054225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.054500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.054510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.054692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.054704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 
00:30:05.089 [2024-12-06 11:29:11.054751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.054764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.055058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.055070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.055257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.055268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.055481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.055493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.055829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.055841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 
00:30:05.089 [2024-12-06 11:29:11.056159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.056170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.056340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.056350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.056672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.056683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.056968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.056980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.057320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.057331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 
00:30:05.089 [2024-12-06 11:29:11.057518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.057529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.057857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.057873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.058221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.058233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.058561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.058573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.058767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.058777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 
00:30:05.089 [2024-12-06 11:29:11.059102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.059113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.059416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.059427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.059705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.059716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.059931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.059943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.060200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.060211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 
00:30:05.089 [2024-12-06 11:29:11.060530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.060541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.060797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.060808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.061114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.061125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.061460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.061472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.061519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.061530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 
00:30:05.089 [2024-12-06 11:29:11.061826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.061837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.062185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.062198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.062384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.062398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.062573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.062584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.062894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.062905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 
00:30:05.089 [2024-12-06 11:29:11.063233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.063244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.063578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.063589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.089 qpair failed and we were unable to recover it. 00:30:05.089 [2024-12-06 11:29:11.063764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.089 [2024-12-06 11:29:11.063777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.064100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.064112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.064448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.064460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 
00:30:05.090 [2024-12-06 11:29:11.064640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.064652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.064961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.064973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.065256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.065267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.065577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.065589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.065914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.065925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 
00:30:05.090 [2024-12-06 11:29:11.066219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.066230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.066487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.066499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.066675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.066687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.066860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.066891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.067195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.067207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 
00:30:05.090 [2024-12-06 11:29:11.067542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.067553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.067884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.067897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.068192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.068203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.068545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.068556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.068867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.068879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 
00:30:05.090 [2024-12-06 11:29:11.069081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.069092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.069143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.069152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.069317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.069327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.069609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.069621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 00:30:05.090 [2024-12-06 11:29:11.069936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.090 [2024-12-06 11:29:11.069948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.090 qpair failed and we were unable to recover it. 
00:30:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.090 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.093 [2024-12-06 11:29:11.099612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.099623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.099959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.099971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.100309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.100320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.100629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.100640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.100983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.100994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 
00:30:05.093 [2024-12-06 11:29:11.101331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.101342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.101656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.101668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.101981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.101993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.102174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.102186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.102516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.102527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 
00:30:05.093 [2024-12-06 11:29:11.102835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.102847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.103048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.103059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.103382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.103393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.103575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.103586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 00:30:05.093 [2024-12-06 11:29:11.103907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.093 [2024-12-06 11:29:11.103919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.093 qpair failed and we were unable to recover it. 
00:30:05.093 Malloc0
00:30:05.094 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.094 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:05.094 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.094 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.094 [... connect() errno=111 / qpair-failure retry messages repeated, 11:29:11.104216 through 11:29:11.104971 ...]
00:30:05.094 [... connect() errno=111 / qpair-failure retry messages repeated, 11:29:11.105248 through 11:29:11.107538 ...]
00:30:05.094 [2024-12-06 11:29:11.107565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:05.094 [... connect() errno=111 / qpair-failure retry messages repeated, 11:29:11.107852 through 11:29:11.108916 ...]
00:30:05.095 [... connect() errno=111 / qpair-failure retry messages repeated, 11:29:11.109194 through 11:29:11.116179 ...]
00:30:05.095 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.095 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:05.095 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.095 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.095 [... connect() errno=111 / qpair-failure retry messages repeated, 11:29:11.116402 through 11:29:11.118586 ...]
00:30:05.095 [2024-12-06 11:29:11.118891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.118902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.119082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.119093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.119410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.119421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.119496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.119505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.119786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.119799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 
00:30:05.095 [2024-12-06 11:29:11.120126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.120137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.120468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.120479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.120528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.120537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.120706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.120716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.121003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.121014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 
00:30:05.095 [2024-12-06 11:29:11.121345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.121356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.121658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.121669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.121744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.121753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.122039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.122050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 00:30:05.095 [2024-12-06 11:29:11.122362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.095 [2024-12-06 11:29:11.122373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420 00:30:05.095 qpair failed and we were unable to recover it. 
00:30:05.095 [2024-12-06 11:29:11.122679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.095 [2024-12-06 11:29:11.122691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.095 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.123085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.123096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.123397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.123409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.123596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.123608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.123784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.123795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.123988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.124001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.124304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.124315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.124649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.124659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.124846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.124857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.125063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.125075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.125398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.125410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.125681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.125692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.126005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.126016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.126323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.126334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.126650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.126661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.126978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.126990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.127266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.127277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.127463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.127473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.127803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.127814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.128152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.128163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.128484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.128495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.096 [2024-12-06 11:29:11.128854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.128868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:05.096 [2024-12-06 11:29:11.129189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.129200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.096 [2024-12-06 11:29:11.129447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.129459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.096 [2024-12-06 11:29:11.129645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.129656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.129980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.129992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.130328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.130340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.130629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.130640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.130977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.130990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.131325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.131335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.131529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.131541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.131630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.131641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.131983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.131994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.132166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.132177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.132473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.132484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.132785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.132796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.133123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.133134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.096 qpair failed and we were unable to recover it.
00:30:05.096 [2024-12-06 11:29:11.133308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.096 [2024-12-06 11:29:11.133320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.133536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.133547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.133840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.133851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.134284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.134296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.134478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.134489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.134813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.134824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.135117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.135129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.135460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.135471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.135805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.135816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.136007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.136018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.136334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.136346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.136622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.136633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.136815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.136826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.137123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.137134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.137306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.137318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.137647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.137658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.137967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.137979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.138276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.138286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.138596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.138609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.138949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.138960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.139158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.139171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.139482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.139493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.139685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.139696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.139924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.139936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.140245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.140256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.140594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.140605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.097 [2024-12-06 11:29:11.140921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.140933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:05.097 [2024-12-06 11:29:11.141129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.141140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.097 [2024-12-06 11:29:11.141466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.141476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.097 [2024-12-06 11:29:11.141795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.141806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.142144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.142156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.142313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.142325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.142507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.142519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.142806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.142817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.143203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.143215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.143531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.143541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.143849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.097 [2024-12-06 11:29:11.143860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.097 qpair failed and we were unable to recover it.
00:30:05.097 [2024-12-06 11:29:11.144186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.144197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.144519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.144531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.144706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.144717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.145036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.145047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.145365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.145376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.145569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.145582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.145879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.145890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.145940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.145949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.146274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.146285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.146597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.146608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.146939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.146951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.147236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.147247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.147561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.098 [2024-12-06 11:29:11.147572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239f490 with addr=10.0.0.2, port=4420
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 [2024-12-06 11:29:11.147824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:05.098 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:05.098 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:05.098 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:05.098 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.098 [2024-12-06 11:29:11.158518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:05.098 [2024-12-06 11:29:11.158592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:05.098 [2024-12-06 11:29:11.158610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:05.098 [2024-12-06 11:29:11.158618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:05.098 [2024-12-06 11:29:11.158626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:05.098 [2024-12-06 11:29:11.158646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:05.098 qpair failed and we were unable to recover it.
00:30:05.098 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.098 11:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3628892 00:30:05.098 [2024-12-06 11:29:11.168443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.098 [2024-12-06 11:29:11.168506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.098 [2024-12-06 11:29:11.168524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.098 [2024-12-06 11:29:11.168532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.098 [2024-12-06 11:29:11.168539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.098 [2024-12-06 11:29:11.168554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.098 qpair failed and we were unable to recover it. 
00:30:05.098 [2024-12-06 11:29:11.178429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.098 [2024-12-06 11:29:11.178485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.098 [2024-12-06 11:29:11.178499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.098 [2024-12-06 11:29:11.178507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.098 [2024-12-06 11:29:11.178513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.098 [2024-12-06 11:29:11.178527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.098 qpair failed and we were unable to recover it. 
00:30:05.098 [2024-12-06 11:29:11.188474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.098 [2024-12-06 11:29:11.188539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.098 [2024-12-06 11:29:11.188553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.098 [2024-12-06 11:29:11.188560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.098 [2024-12-06 11:29:11.188567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.098 [2024-12-06 11:29:11.188581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.098 qpair failed and we were unable to recover it. 
00:30:05.098 [2024-12-06 11:29:11.198422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.098 [2024-12-06 11:29:11.198488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.098 [2024-12-06 11:29:11.198513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.098 [2024-12-06 11:29:11.198522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.098 [2024-12-06 11:29:11.198530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.098 [2024-12-06 11:29:11.198549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.098 qpair failed and we were unable to recover it. 
00:30:05.098 [2024-12-06 11:29:11.208414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.098 [2024-12-06 11:29:11.208473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.098 [2024-12-06 11:29:11.208490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.098 [2024-12-06 11:29:11.208497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.098 [2024-12-06 11:29:11.208512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.098 [2024-12-06 11:29:11.208529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.098 qpair failed and we were unable to recover it. 
00:30:05.098 [2024-12-06 11:29:11.218371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.098 [2024-12-06 11:29:11.218467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.098 [2024-12-06 11:29:11.218481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.098 [2024-12-06 11:29:11.218489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.098 [2024-12-06 11:29:11.218496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.098 [2024-12-06 11:29:11.218510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.098 qpair failed and we were unable to recover it. 
00:30:05.098 [2024-12-06 11:29:11.228377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.098 [2024-12-06 11:29:11.228437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.098 [2024-12-06 11:29:11.228451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.098 [2024-12-06 11:29:11.228458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.098 [2024-12-06 11:29:11.228465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.098 [2024-12-06 11:29:11.228479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.098 qpair failed and we were unable to recover it. 
00:30:05.361 [2024-12-06 11:29:11.238530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.238583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.238597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.238604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.238611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.238626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.248541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.248596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.248610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.248617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.248624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.248638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.258534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.258590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.258615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.258624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.258631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.258652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.268537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.268595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.268611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.268619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.268626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.268641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.278601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.278660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.278674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.278681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.278688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.278703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.288655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.288713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.288726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.288734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.288740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.288754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.298650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.298703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.298721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.298729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.298736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.298750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.308690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.308749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.308763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.308770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.308777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.308791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.318730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.318795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.318809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.318817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.318824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.318838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.328739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.328792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.328806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.328813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.328820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.328834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.338747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.338810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.338824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.338832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.338843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.338857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.348795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.348859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.348877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.348884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.348891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.348905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.358717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.358783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.358796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.358804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.358811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.358825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.368853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.368914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.368930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.368937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.368944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.368959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.378870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.378927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.378941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.378949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.378955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.378969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.389017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.389083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.389097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.389104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.389111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.389125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.399017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.399080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.399094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.399101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.399108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.399122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.409022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.409080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.409094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.409101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.409108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.409122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.419098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.419153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.419166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.419174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.419181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.419194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.429017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.429114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.429131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.362 [2024-12-06 11:29:11.429140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.362 [2024-12-06 11:29:11.429146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.362 [2024-12-06 11:29:11.429161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.362 qpair failed and we were unable to recover it. 
00:30:05.362 [2024-12-06 11:29:11.439054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.362 [2024-12-06 11:29:11.439110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.362 [2024-12-06 11:29:11.439123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.439131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.439138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.439151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.449055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.449108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.449122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.449129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.449136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.449150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.459153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.459209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.459223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.459230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.459237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.459250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.469166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.469222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.469236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.469243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.469254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.469268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.479181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.479237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.479253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.479261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.479269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.479287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.489213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.489268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.489282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.489290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.489297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.489311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.499223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.499282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.499295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.499303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.499309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.499323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.509253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.509312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.509326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.509333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.509340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.509353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.363 [2024-12-06 11:29:11.519175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.363 [2024-12-06 11:29:11.519239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.363 [2024-12-06 11:29:11.519254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.363 [2024-12-06 11:29:11.519262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.363 [2024-12-06 11:29:11.519269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.363 [2024-12-06 11:29:11.519283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.363 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.529309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.529360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.529375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.529383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.529390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.529405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.539342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.539392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.539409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.539416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.539423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.539437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.549317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.549371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.549385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.549392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.549398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.549412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.559403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.559459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.559476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.559483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.559490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.559504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.569280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.569345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.569359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.569367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.569373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.569387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.579409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.579461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.579474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.579482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.579488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.579502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.589473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.589524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.589537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.589545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.589551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.589565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.599508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.625 [2024-12-06 11:29:11.599562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.625 [2024-12-06 11:29:11.599575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.625 [2024-12-06 11:29:11.599583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.625 [2024-12-06 11:29:11.599593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.625 [2024-12-06 11:29:11.599607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-12-06 11:29:11.609398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.609457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.609471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.609478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.609485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.609499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.619568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.619642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.619656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.619663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.619670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.619685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.629593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.629660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.629686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.629695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.629703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.629723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.639624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.639713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.639728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.639736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.639743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.639758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.649513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.649565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.649579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.649586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.649593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.649607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.659538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.659606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.659619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.659627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.659634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.659648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.669694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.669776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.669790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.669798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.669804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.669818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.679716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.679808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.679823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.679830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.679837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.679851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.689795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.689869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.689887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.689894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.689902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.689916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.699781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.699835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.699849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.699856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.699867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.699882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.709685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.709746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.709760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.709768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.709774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.709789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.719823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.719893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.719907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.719915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.719922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.719936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.729834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.729884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.729898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.626 [2024-12-06 11:29:11.729906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.626 [2024-12-06 11:29:11.729915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.626 [2024-12-06 11:29:11.729929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-12-06 11:29:11.739911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.626 [2024-12-06 11:29:11.739965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.626 [2024-12-06 11:29:11.739979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.627 [2024-12-06 11:29:11.739986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.627 [2024-12-06 11:29:11.739993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.627 [2024-12-06 11:29:11.740007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-12-06 11:29:11.749917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.627 [2024-12-06 11:29:11.749976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.627 [2024-12-06 11:29:11.749990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.627 [2024-12-06 11:29:11.749997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.627 [2024-12-06 11:29:11.750004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.627 [2024-12-06 11:29:11.750017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-12-06 11:29:11.759824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.627 [2024-12-06 11:29:11.759883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.627 [2024-12-06 11:29:11.759897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.627 [2024-12-06 11:29:11.759904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.627 [2024-12-06 11:29:11.759910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.627 [2024-12-06 11:29:11.759925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-12-06 11:29:11.769937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.627 [2024-12-06 11:29:11.769989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.627 [2024-12-06 11:29:11.770003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.627 [2024-12-06 11:29:11.770010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.627 [2024-12-06 11:29:11.770018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:05.627 [2024-12-06 11:29:11.770031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 - 00:30:06.154 [2024-12-06 11:29:11.780017 - 11:29:12.111006] (the identical CONNECT failure block above repeated 34 more times at ~10 ms intervals; each attempt ended with "qpair failed and we were unable to recover it.")
00:30:06.154 [2024-12-06 11:29:12.120915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.120970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.120984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.120991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.121001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.121016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.130848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.130905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.130918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.130926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.130933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.130947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.140982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.141039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.141052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.141060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.141066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.141080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.150973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.151046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.151060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.151067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.151074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.151089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.161052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.161111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.161124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.161131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.161137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.161151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.171046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.171142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.171157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.171165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.171172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.171186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.181089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.181139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.181152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.181159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.181166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.181180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.191019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.191113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.191127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.191135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.191141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.191155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.201156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.201210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.201224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.201231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.201238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.201251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.211159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.211216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.211233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.211240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.211247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.211260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.221070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.221128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.221142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.221150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.221156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.154 [2024-12-06 11:29:12.221171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.154 qpair failed and we were unable to recover it. 
00:30:06.154 [2024-12-06 11:29:12.231211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.154 [2024-12-06 11:29:12.231270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.154 [2024-12-06 11:29:12.231284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.154 [2024-12-06 11:29:12.231292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.154 [2024-12-06 11:29:12.231298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.231312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.241126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.241193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.241207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.241215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.241221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.241235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.251289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.251341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.251355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.251366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.251373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.251387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.261306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.261360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.261374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.261382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.261389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.261402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.271302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.271399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.271414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.271421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.271428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.271442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.281246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.281342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.281356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.281364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.281370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.281384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.291391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.291443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.291456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.291463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.291470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.291484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.301417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.301517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.301531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.301539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.301545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.301559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.155 [2024-12-06 11:29:12.311448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.155 [2024-12-06 11:29:12.311502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.155 [2024-12-06 11:29:12.311516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.155 [2024-12-06 11:29:12.311523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.155 [2024-12-06 11:29:12.311530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.155 [2024-12-06 11:29:12.311544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.155 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.321488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.321584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.321598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.321605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.321612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.321625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.331515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.331570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.331584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.331591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.331598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.331612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.341503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.341556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.341573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.341580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.341587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.341601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.351520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.351572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.351586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.351593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.351600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.351614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.361588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.361643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.361659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.361666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.361673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.361691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.371504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.371606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.371623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.371631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.371637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.371652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.381624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.381683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.381708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.381722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.381730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.381749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.391668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.391724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.391740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.391747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.391754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.391769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.401697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.401757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.401771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.401779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.401786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.401800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.411690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.411755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.411769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.411777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.411786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.411801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.421747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.421807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.421820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.421828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.421834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.421849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.431807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.431891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.431905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.431913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.431920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.431934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.441809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.441874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.441889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.441897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.441904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.441918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.451697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.451752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.451768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.451775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.451782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.451796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.461852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.461957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.461972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.461979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.461986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.462000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.471889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.471949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.471967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.471975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.471981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.471995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.481907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.481969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.481984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.481994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.482002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.482017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.491799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.491853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.491872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.491879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.491886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.491901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.501945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.502046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.502060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.418 [2024-12-06 11:29:12.502068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.418 [2024-12-06 11:29:12.502074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.418 [2024-12-06 11:29:12.502088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.418 qpair failed and we were unable to recover it. 
00:30:06.418 [2024-12-06 11:29:12.511984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.418 [2024-12-06 11:29:12.512040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.418 [2024-12-06 11:29:12.512054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.512069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.512076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.512090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-12-06 11:29:12.522038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.419 [2024-12-06 11:29:12.522091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.419 [2024-12-06 11:29:12.522105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.522114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.522121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.522135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-12-06 11:29:12.532045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.419 [2024-12-06 11:29:12.532100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.419 [2024-12-06 11:29:12.532114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.532122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.532128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.532142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-12-06 11:29:12.542045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.419 [2024-12-06 11:29:12.542098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.419 [2024-12-06 11:29:12.542111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.542119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.542125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.542139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-12-06 11:29:12.552100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.419 [2024-12-06 11:29:12.552157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.419 [2024-12-06 11:29:12.552170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.552177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.552184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.552198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-12-06 11:29:12.562137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.419 [2024-12-06 11:29:12.562191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.419 [2024-12-06 11:29:12.562206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.562213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.562220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.562234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-12-06 11:29:12.572176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.419 [2024-12-06 11:29:12.572237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.419 [2024-12-06 11:29:12.572252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.572260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.572266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.572281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.419 [2024-12-06 11:29:12.582065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.419 [2024-12-06 11:29:12.582120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.419 [2024-12-06 11:29:12.582135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.419 [2024-12-06 11:29:12.582142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.419 [2024-12-06 11:29:12.582149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.419 [2024-12-06 11:29:12.582162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.419 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.592212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.592268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.592282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.592290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.592296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.592310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.602251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.602311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.602325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.602333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.602339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.602353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.612274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.612329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.612343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.612350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.612357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.612371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.622318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.622417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.622431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.622439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.622446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.622459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.632335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.632394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.632408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.632415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.632422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.632436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.642229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.642299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.642313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.642324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.642331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.642347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.652385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.652438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.652452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.652459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.652466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.652480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.662373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.662431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.662445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.662452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.662459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.662472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.672447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.672508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.672522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.672529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.672536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.672550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.682466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.682563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.682577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.682585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.682591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.682605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.692475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.682 [2024-12-06 11:29:12.692529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.682 [2024-12-06 11:29:12.692543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.682 [2024-12-06 11:29:12.692550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.682 [2024-12-06 11:29:12.692557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.682 [2024-12-06 11:29:12.692571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.682 qpair failed and we were unable to recover it. 
00:30:06.682 [2024-12-06 11:29:12.702496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.682 [2024-12-06 11:29:12.702550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.682 [2024-12-06 11:29:12.702563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.682 [2024-12-06 11:29:12.702571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.702578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.702592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.712541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.712595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.712609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.712617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.712623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.712637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.722571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.722635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.722660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.722670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.722678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.722698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.732585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.732645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.732671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.732680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.732688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.732707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.742641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.742702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.742718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.742726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.742733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.742747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.752532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.752589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.752603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.752610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.752617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.752631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.762667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.762731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.762745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.762752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.762759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.762773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.772709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.772765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.772780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.772792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.772799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.772813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.782727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.782786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.782800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.782807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.782814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.782828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.792740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.792836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.792851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.792858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.792870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.792884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.802793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.802847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.802860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.802872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.802878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.802892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.812802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.812859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.812876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.812884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.812890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.812904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.822833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.822884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.822898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.822905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.822912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.683 [2024-12-06 11:29:12.822926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.683 qpair failed and we were unable to recover it.
00:30:06.683 [2024-12-06 11:29:12.832877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.683 [2024-12-06 11:29:12.832936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.683 [2024-12-06 11:29:12.832949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.683 [2024-12-06 11:29:12.832957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.683 [2024-12-06 11:29:12.832963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.684 [2024-12-06 11:29:12.832977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.684 qpair failed and we were unable to recover it.
00:30:06.684 [2024-12-06 11:29:12.842907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.684 [2024-12-06 11:29:12.842965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.684 [2024-12-06 11:29:12.842979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.684 [2024-12-06 11:29:12.842986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.684 [2024-12-06 11:29:12.842993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.684 [2024-12-06 11:29:12.843007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.684 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.852941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.852997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.853010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.853018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.853024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.853038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.862973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.863042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.863056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.863064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.863070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.863084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.872877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.872939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.872952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.872960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.872966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.872980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.883018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.883073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.883086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.883094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.883100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.883114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.893040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.893094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.893108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.893115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.893122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.893136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.903068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.903123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.903137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.903148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.903155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.903169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.913010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.913063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.913077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.913084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.913091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.913105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.923146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.923199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.923213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.923220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.923227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.923240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.933032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.933088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.933102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.933109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.933116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.933129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.943181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.943238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.943252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.943260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.943266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.943283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.953224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.953307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.953321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.953328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.953335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.953349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.963250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.963306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.963320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.963328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.963335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.963349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.973304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.973360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.973374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.973381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.973388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.973402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.983287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.983341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.983356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.983363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.983370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.983384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:12.993312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:12.993396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:12.993409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:12.993417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:12.993423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:12.993437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:13.003237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:13.003297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:13.003312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:13.003319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:13.003326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:13.003340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:13.013314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:13.013370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:13.013384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:13.013392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:13.013399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:13.013413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:13.023278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:13.023329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:13.023343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:13.023350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:13.023357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:13.023371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.947 qpair failed and we were unable to recover it.
00:30:06.947 [2024-12-06 11:29:13.033322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.947 [2024-12-06 11:29:13.033420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.947 [2024-12-06 11:29:13.033434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.947 [2024-12-06 11:29:13.033445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.947 [2024-12-06 11:29:13.033452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.947 [2024-12-06 11:29:13.033466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.948 qpair failed and we were unable to recover it.
00:30:06.948 [2024-12-06 11:29:13.043473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.948 [2024-12-06 11:29:13.043528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.948 [2024-12-06 11:29:13.043542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.948 [2024-12-06 11:29:13.043549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.948 [2024-12-06 11:29:13.043556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:06.948 [2024-12-06 11:29:13.043569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.948 qpair failed and we were unable to recover it.
00:30:06.948 [2024-12-06 11:29:13.053365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.948 [2024-12-06 11:29:13.053420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.948 [2024-12-06 11:29:13.053433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.948 [2024-12-06 11:29:13.053441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.948 [2024-12-06 11:29:13.053447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.948 [2024-12-06 11:29:13.053461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.948 qpair failed and we were unable to recover it. 
00:30:06.948 [2024-12-06 11:29:13.063513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.948 [2024-12-06 11:29:13.063573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.948 [2024-12-06 11:29:13.063586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.948 [2024-12-06 11:29:13.063594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.948 [2024-12-06 11:29:13.063601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.948 [2024-12-06 11:29:13.063615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.948 qpair failed and we were unable to recover it. 
00:30:06.948 [2024-12-06 11:29:13.073536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.948 [2024-12-06 11:29:13.073622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.948 [2024-12-06 11:29:13.073636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.948 [2024-12-06 11:29:13.073645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.948 [2024-12-06 11:29:13.073651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.948 [2024-12-06 11:29:13.073669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.948 qpair failed and we were unable to recover it. 
00:30:06.948 [2024-12-06 11:29:13.083570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.948 [2024-12-06 11:29:13.083631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.948 [2024-12-06 11:29:13.083644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.948 [2024-12-06 11:29:13.083652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.948 [2024-12-06 11:29:13.083659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.948 [2024-12-06 11:29:13.083673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.948 qpair failed and we were unable to recover it. 
00:30:06.948 [2024-12-06 11:29:13.093593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.948 [2024-12-06 11:29:13.093644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.948 [2024-12-06 11:29:13.093657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.948 [2024-12-06 11:29:13.093665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.948 [2024-12-06 11:29:13.093672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.948 [2024-12-06 11:29:13.093685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.948 qpair failed and we were unable to recover it. 
00:30:06.948 [2024-12-06 11:29:13.103660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.948 [2024-12-06 11:29:13.103736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.948 [2024-12-06 11:29:13.103750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.948 [2024-12-06 11:29:13.103758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.948 [2024-12-06 11:29:13.103765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:06.948 [2024-12-06 11:29:13.103779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.948 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.113530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.113588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.113601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.113609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.113615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.113629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.123694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.123758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.123772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.123779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.123786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.123800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.133710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.133764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.133777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.133785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.133791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.133805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.143734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.143793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.143807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.143814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.143821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.143835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.153660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.153759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.153773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.153781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.153788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.153802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.163847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.163927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.163940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.163951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.163958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.163973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.173787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.173853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.173872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.173880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.173886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.173900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.183831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.183914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.183928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.183935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.183943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.183957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.193879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.193970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.193985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.211 [2024-12-06 11:29:13.193993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.211 [2024-12-06 11:29:13.194000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.211 [2024-12-06 11:29:13.194015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.211 qpair failed and we were unable to recover it. 
00:30:07.211 [2024-12-06 11:29:13.203908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.211 [2024-12-06 11:29:13.203966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.211 [2024-12-06 11:29:13.203980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.203988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.203994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.204012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.213896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.213949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.213963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.213970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.213977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.213990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.223831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.223891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.223905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.223913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.223919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.223933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.233977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.234032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.234046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.234053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.234060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.234073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.244030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.244088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.244102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.244109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.244116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.244129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.254005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.254071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.254087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.254094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.254105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.254120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.264043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.264099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.264113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.264121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.264128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.264142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.274105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.274165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.274179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.274188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.274195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.274210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.284166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.284221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.284235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.284242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.284249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.284262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.294149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.294205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.294218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.294229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.294235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.294249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.304153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.304207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.304221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.304228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.304234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.304248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.314201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.314294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.314308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.314315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.314322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.314335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.324112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.324208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.324221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.212 [2024-12-06 11:29:13.324228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.212 [2024-12-06 11:29:13.324235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.212 [2024-12-06 11:29:13.324248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.212 qpair failed and we were unable to recover it. 
00:30:07.212 [2024-12-06 11:29:13.334238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.212 [2024-12-06 11:29:13.334293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.212 [2024-12-06 11:29:13.334306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.213 [2024-12-06 11:29:13.334313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.213 [2024-12-06 11:29:13.334320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.213 [2024-12-06 11:29:13.334337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.213 qpair failed and we were unable to recover it. 
00:30:07.213 [2024-12-06 11:29:13.344268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.213 [2024-12-06 11:29:13.344319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.213 [2024-12-06 11:29:13.344332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.213 [2024-12-06 11:29:13.344340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.213 [2024-12-06 11:29:13.344347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.213 [2024-12-06 11:29:13.344361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.213 qpair failed and we were unable to recover it. 
00:30:07.213 [2024-12-06 11:29:13.354310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.213 [2024-12-06 11:29:13.354364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.213 [2024-12-06 11:29:13.354377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.213 [2024-12-06 11:29:13.354384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.213 [2024-12-06 11:29:13.354391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.213 [2024-12-06 11:29:13.354404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.213 qpair failed and we were unable to recover it. 
00:30:07.213 [2024-12-06 11:29:13.364218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.213 [2024-12-06 11:29:13.364290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.213 [2024-12-06 11:29:13.364306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.213 [2024-12-06 11:29:13.364314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.213 [2024-12-06 11:29:13.364320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.213 [2024-12-06 11:29:13.364335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.213 qpair failed and we were unable to recover it. 
00:30:07.213 [2024-12-06 11:29:13.374369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.213 [2024-12-06 11:29:13.374417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.213 [2024-12-06 11:29:13.374430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.213 [2024-12-06 11:29:13.374438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.213 [2024-12-06 11:29:13.374445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.213 [2024-12-06 11:29:13.374458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.213 qpair failed and we were unable to recover it. 
00:30:07.475 [2024-12-06 11:29:13.384384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.475 [2024-12-06 11:29:13.384484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.475 [2024-12-06 11:29:13.384497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.475 [2024-12-06 11:29:13.384505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.475 [2024-12-06 11:29:13.384511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.475 [2024-12-06 11:29:13.384525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.475 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.394422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.394483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.394496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.394504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.394511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.394524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.404431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.404524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.404539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.404547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.404554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.404568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.414467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.414524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.414538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.414545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.414552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.414565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.424527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.424625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.424638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.424649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.424656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.424670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.434529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.434592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.434617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.434626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.434634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.434653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.444563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.444633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.444658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.444667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.444675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.444694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.454581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.454642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.454667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.454676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.454684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.454704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.464614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.464672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.464687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.464695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.464702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.464727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.474517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.474570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.474585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.474593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.474600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.474615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.484654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.484715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.484729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.484737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.484744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.484758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.494696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.494774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.494788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.494796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.494802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.494816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.504712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.504761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.504774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.504782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.504788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.504802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.476 [2024-12-06 11:29:13.514810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.476 [2024-12-06 11:29:13.514873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.476 [2024-12-06 11:29:13.514887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.476 [2024-12-06 11:29:13.514894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.476 [2024-12-06 11:29:13.514901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.476 [2024-12-06 11:29:13.514915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.476 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.524793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.524847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.524865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.524873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.524880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.524896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.534792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.534884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.534898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.534906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.534912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.534926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.544704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.544759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.544773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.544781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.544788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.544801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.554901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.554956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.554970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.554981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.554988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.555002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.564890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.564950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.564964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.564972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.564978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.564992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.574885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.574936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.574950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.574958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.574965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.574978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.584910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.584975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.584988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.584996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.585002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.585016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.594949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.595009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.595023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.595030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.595037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.595055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.605009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.605065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.605079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.605087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.605093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.605107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.614944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.614994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.615008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.615015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.615022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.615035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.624912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.477 [2024-12-06 11:29:13.624969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.477 [2024-12-06 11:29:13.624983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.477 [2024-12-06 11:29:13.624990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.477 [2024-12-06 11:29:13.624997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:07.477 [2024-12-06 11:29:13.625010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.477 qpair failed and we were unable to recover it. 
00:30:07.477 [2024-12-06 11:29:13.635122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.477 [2024-12-06 11:29:13.635198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.477 [2024-12-06 11:29:13.635211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.477 [2024-12-06 11:29:13.635218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.477 [2024-12-06 11:29:13.635225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.477 [2024-12-06 11:29:13.635239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.477 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.645071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.645134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.645148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.645155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.645162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.645175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.655091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.655140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.655154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.655162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.655168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.655182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.665154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.665213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.665227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.665234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.665240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.665254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.675204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.675261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.675274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.675282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.675288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.675302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.685125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.685176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.685190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.685201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.685207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.685221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.695204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.695278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.695294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.695302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.695309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.695323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.705282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.705339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.705353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.705361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.705368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.705383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.715283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.715356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.715370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.715377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.715384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.715398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.725360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.725416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.725430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.725437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.725444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.725462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.735325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.735370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.735383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.735391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.735397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.735411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.745385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.745459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.745472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.745480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.745487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.745501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.755410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.755466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.740 [2024-12-06 11:29:13.755480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.740 [2024-12-06 11:29:13.755488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.740 [2024-12-06 11:29:13.755494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.740 [2024-12-06 11:29:13.755508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.740 qpair failed and we were unable to recover it.
00:30:07.740 [2024-12-06 11:29:13.765318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.740 [2024-12-06 11:29:13.765371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.765384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.765392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.765399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.765412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.775429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.775481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.775497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.775504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.775512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.775530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.785500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.785553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.785567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.785574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.785581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.785595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.795527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.795585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.795598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.795606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.795612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.795626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.805554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.805612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.805626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.805633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.805640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.805654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.815538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.815588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.815603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.815614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.815621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.815635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.825573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.825624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.825639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.825646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.825653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.825667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.835623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.835679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.835692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.835700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.835707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.835720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.845662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.845716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.845729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.845737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.845744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.845757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.855517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.855563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.855577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.855584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.855591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.855608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.865682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.865742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.865755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.865762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.865769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.865782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.875739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.875822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.875836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.875843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.875850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.875868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.885791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.741 [2024-12-06 11:29:13.885850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.741 [2024-12-06 11:29:13.885867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.741 [2024-12-06 11:29:13.885875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.741 [2024-12-06 11:29:13.885882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.741 [2024-12-06 11:29:13.885895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.741 qpair failed and we were unable to recover it.
00:30:07.741 [2024-12-06 11:29:13.895721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.742 [2024-12-06 11:29:13.895779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.742 [2024-12-06 11:29:13.895793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.742 [2024-12-06 11:29:13.895800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.742 [2024-12-06 11:29:13.895806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:07.742 [2024-12-06 11:29:13.895820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:07.742 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.905834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.905897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.905911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.905918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.905925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.905938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.915883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.915939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.915953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.915961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.915967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.915981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.925861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.925944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.925958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.925966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.925973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.925986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.935853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.935910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.935923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.935931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.935937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.935951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.945965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.946057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.946071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.946081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.946088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.946102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.955967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.956025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.956039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.956046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.956053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.956066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.965969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.966022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.966036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.966043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.966050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.966063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.975855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.003 [2024-12-06 11:29:13.975905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.003 [2024-12-06 11:29:13.975919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.003 [2024-12-06 11:29:13.975927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.003 [2024-12-06 11:29:13.975933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.003 [2024-12-06 11:29:13.975947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.003 qpair failed and we were unable to recover it.
00:30:08.003 [2024-12-06 11:29:13.986040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.003 [2024-12-06 11:29:13.986091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.003 [2024-12-06 11:29:13.986104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.003 [2024-12-06 11:29:13.986111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.003 [2024-12-06 11:29:13.986118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.003 [2024-12-06 11:29:13.986136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.003 qpair failed and we were unable to recover it. 
00:30:08.003 [2024-12-06 11:29:13.996175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.003 [2024-12-06 11:29:13.996242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.003 [2024-12-06 11:29:13.996256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.003 [2024-12-06 11:29:13.996263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.003 [2024-12-06 11:29:13.996270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.003 [2024-12-06 11:29:13.996283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.003 qpair failed and we were unable to recover it. 
00:30:08.003 [2024-12-06 11:29:14.006136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.003 [2024-12-06 11:29:14.006193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.003 [2024-12-06 11:29:14.006207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.003 [2024-12-06 11:29:14.006214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.003 [2024-12-06 11:29:14.006221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.003 [2024-12-06 11:29:14.006234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.003 qpair failed and we were unable to recover it. 
00:30:08.003 [2024-12-06 11:29:14.016173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.016247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.016262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.016270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.016278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.016297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.026224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.026280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.026295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.026303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.026311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.026326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.036206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.036260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.036275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.036283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.036289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.036303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.046197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.046249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.046262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.046270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.046276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.046290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.056184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.056227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.056240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.056248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.056254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.056268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.066264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.066316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.066330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.066337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.066344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.066357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.076311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.076369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.076385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.076393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.076400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.076414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.086348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.086400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.086414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.086421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.086427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.086441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.096293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.096337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.096351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.096358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.096365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.096379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.106383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.106485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.106499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.106507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.106514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.106528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.116358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.116416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.116429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.116437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.116444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.116462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.126380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.126433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.126446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.126454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.126460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.126474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.136297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.136349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.136362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.136369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.136376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.136390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.146477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.146530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.146543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.146551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.146557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.146571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.156519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.156574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.156588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.156595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.156602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.156616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.004 [2024-12-06 11:29:14.166511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.004 [2024-12-06 11:29:14.166567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.004 [2024-12-06 11:29:14.166581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.004 [2024-12-06 11:29:14.166588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.004 [2024-12-06 11:29:14.166595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.004 [2024-12-06 11:29:14.166609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.004 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.176529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.176578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.176593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.176600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.176607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.266 [2024-12-06 11:29:14.176620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.266 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.186631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.186689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.186703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.186710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.186717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.266 [2024-12-06 11:29:14.186730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.266 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.196504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.196561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.196575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.196584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.196591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.266 [2024-12-06 11:29:14.196605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.266 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.206693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.206744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.206761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.206768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.206775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.266 [2024-12-06 11:29:14.206789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.266 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.216660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.216740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.216755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.216762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.216770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.266 [2024-12-06 11:29:14.216787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.266 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.226684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.226732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.226747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.226754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.226761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.266 [2024-12-06 11:29:14.226775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.266 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.236653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.236703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.236716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.236724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.236731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.266 [2024-12-06 11:29:14.236744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.266 qpair failed and we were unable to recover it. 
00:30:08.266 [2024-12-06 11:29:14.246690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.266 [2024-12-06 11:29:14.246741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.266 [2024-12-06 11:29:14.246755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.266 [2024-12-06 11:29:14.246762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.266 [2024-12-06 11:29:14.246769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.267 [2024-12-06 11:29:14.246790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.267 qpair failed and we were unable to recover it. 
00:30:08.267 [2024-12-06 11:29:14.256729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.256778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.256791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.256798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.256805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.256819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.266747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.266797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.266811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.266818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.266825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.266839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.276825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.276902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.276917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.276924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.276931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.276945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.286825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.286904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.286918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.286925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.286933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.286947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.296727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.296771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.296785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.296792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.296799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.296813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.306908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.307006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.307019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.307027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.307034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.307048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.316901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.316969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.316983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.316990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.316997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.317011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.326942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.327023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.327037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.327045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.327052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.327066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.336926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.337016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.337033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.337041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.337048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.337062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.346878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.346937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.346951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.346959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.346965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.346980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.357015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.357070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.357083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.357090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.357097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.357110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.367063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.367166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.367183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.367192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.367199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.267 [2024-12-06 11:29:14.367214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.267 qpair failed and we were unable to recover it.
00:30:08.267 [2024-12-06 11:29:14.377041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.267 [2024-12-06 11:29:14.377086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.267 [2024-12-06 11:29:14.377100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.267 [2024-12-06 11:29:14.377108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.267 [2024-12-06 11:29:14.377114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.268 [2024-12-06 11:29:14.377132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.268 qpair failed and we were unable to recover it.
00:30:08.268 [2024-12-06 11:29:14.387090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.268 [2024-12-06 11:29:14.387141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.268 [2024-12-06 11:29:14.387154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.268 [2024-12-06 11:29:14.387162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.268 [2024-12-06 11:29:14.387168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.268 [2024-12-06 11:29:14.387182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.268 qpair failed and we were unable to recover it.
00:30:08.268 [2024-12-06 11:29:14.397071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.268 [2024-12-06 11:29:14.397159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.268 [2024-12-06 11:29:14.397173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.268 [2024-12-06 11:29:14.397181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.268 [2024-12-06 11:29:14.397188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.268 [2024-12-06 11:29:14.397202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.268 qpair failed and we were unable to recover it.
00:30:08.268 [2024-12-06 11:29:14.407142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.268 [2024-12-06 11:29:14.407205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.268 [2024-12-06 11:29:14.407218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.268 [2024-12-06 11:29:14.407226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.268 [2024-12-06 11:29:14.407233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.268 [2024-12-06 11:29:14.407247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.268 qpair failed and we were unable to recover it.
00:30:08.268 [2024-12-06 11:29:14.417025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.268 [2024-12-06 11:29:14.417068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.268 [2024-12-06 11:29:14.417081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.268 [2024-12-06 11:29:14.417089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.268 [2024-12-06 11:29:14.417096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.268 [2024-12-06 11:29:14.417110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.268 qpair failed and we were unable to recover it.
00:30:08.268 [2024-12-06 11:29:14.427189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.268 [2024-12-06 11:29:14.427235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.268 [2024-12-06 11:29:14.427249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.268 [2024-12-06 11:29:14.427256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.268 [2024-12-06 11:29:14.427263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.268 [2024-12-06 11:29:14.427277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.268 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.437175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.437224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.437239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.437247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.437254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.437268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.447234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.447282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.447296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.447303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.447310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.447323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.457238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.457289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.457302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.457310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.457316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.457330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.467263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.467356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.467373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.467381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.467387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.467402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.477184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.477239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.477253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.477260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.477267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.477281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.487332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.487380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.487394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.487401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.487408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.487423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.497354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.497438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.497452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.497460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.497466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.497480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.507366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.507412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.507426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.507433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.507443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.507457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.517401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.517458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.517472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.517479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.517486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.517499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.527335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.527401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.527415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.527423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.527430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.527444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.537333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.537385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.537399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.537407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.537413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.537427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.547495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.547549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.547563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.547570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.547576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.547590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.557388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.557446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.557459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.557467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.557473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.557487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.567538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.567586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.567599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.567607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.567614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.567627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.577520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.577573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.577599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.577608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.577616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.577636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.587549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.587596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.587611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.587619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.587626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.587642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.597601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.530 [2024-12-06 11:29:14.597705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.530 [2024-12-06 11:29:14.597735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.530 [2024-12-06 11:29:14.597744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.530 [2024-12-06 11:29:14.597751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:08.530 [2024-12-06 11:29:14.597770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.530 qpair failed and we were unable to recover it.
00:30:08.530 [2024-12-06 11:29:14.607636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.530 [2024-12-06 11:29:14.607702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.530 [2024-12-06 11:29:14.607718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.530 [2024-12-06 11:29:14.607726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.530 [2024-12-06 11:29:14.607733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.530 [2024-12-06 11:29:14.607748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.530 qpair failed and we were unable to recover it. 
00:30:08.530 [2024-12-06 11:29:14.617684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.530 [2024-12-06 11:29:14.617732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.530 [2024-12-06 11:29:14.617746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.530 [2024-12-06 11:29:14.617754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.530 [2024-12-06 11:29:14.617760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.530 [2024-12-06 11:29:14.617775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.530 qpair failed and we were unable to recover it. 
00:30:08.530 [2024-12-06 11:29:14.627662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.530 [2024-12-06 11:29:14.627706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.530 [2024-12-06 11:29:14.627720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.530 [2024-12-06 11:29:14.627727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.530 [2024-12-06 11:29:14.627734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.530 [2024-12-06 11:29:14.627748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.530 qpair failed and we were unable to recover it. 
00:30:08.530 [2024-12-06 11:29:14.637583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.530 [2024-12-06 11:29:14.637657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.530 [2024-12-06 11:29:14.637671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.530 [2024-12-06 11:29:14.637679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.530 [2024-12-06 11:29:14.637690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.530 [2024-12-06 11:29:14.637704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.530 qpair failed and we were unable to recover it. 
00:30:08.530 [2024-12-06 11:29:14.647801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-12-06 11:29:14.647856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-12-06 11:29:14.647876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-12-06 11:29:14.647883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-12-06 11:29:14.647890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.531 [2024-12-06 11:29:14.647905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 
00:30:08.531 [2024-12-06 11:29:14.657785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-12-06 11:29:14.657836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-12-06 11:29:14.657850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-12-06 11:29:14.657858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-12-06 11:29:14.657868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.531 [2024-12-06 11:29:14.657882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 
00:30:08.531 [2024-12-06 11:29:14.667812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-12-06 11:29:14.667859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-12-06 11:29:14.667878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-12-06 11:29:14.667886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-12-06 11:29:14.667892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.531 [2024-12-06 11:29:14.667906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 
00:30:08.531 [2024-12-06 11:29:14.677837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-12-06 11:29:14.677883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-12-06 11:29:14.677897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-12-06 11:29:14.677905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-12-06 11:29:14.677911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.531 [2024-12-06 11:29:14.677926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 
00:30:08.531 [2024-12-06 11:29:14.687881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.531 [2024-12-06 11:29:14.687981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.531 [2024-12-06 11:29:14.687995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.531 [2024-12-06 11:29:14.688002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.531 [2024-12-06 11:29:14.688009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.531 [2024-12-06 11:29:14.688024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.531 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-12-06 11:29:14.697847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-12-06 11:29:14.697896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-12-06 11:29:14.697910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-12-06 11:29:14.697917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-12-06 11:29:14.697924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.792 [2024-12-06 11:29:14.697938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-12-06 11:29:14.707900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-12-06 11:29:14.707943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-12-06 11:29:14.707957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-12-06 11:29:14.707964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-12-06 11:29:14.707971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.792 [2024-12-06 11:29:14.707986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-12-06 11:29:14.717901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-12-06 11:29:14.717949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-12-06 11:29:14.717964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-12-06 11:29:14.717972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-12-06 11:29:14.717978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.792 [2024-12-06 11:29:14.717993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-12-06 11:29:14.727836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.792 [2024-12-06 11:29:14.727889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.792 [2024-12-06 11:29:14.727906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.792 [2024-12-06 11:29:14.727913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.792 [2024-12-06 11:29:14.727920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.727935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.737869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.737922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.737935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.737943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.737950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.737964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.747895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.747949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.747963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.747971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.747978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.747992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.758036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.758083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.758097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.758104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.758111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.758124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.768079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.768131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.768144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.768152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.768162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.768176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.778078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.778121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.778134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.778143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.778150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.778163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.788109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.788161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.788174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.788182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.788188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.788203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.798149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.798199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.798213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.798220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.798227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.798241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.808189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.808242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.808255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.808263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.808270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.808283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.818063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.818107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.818121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.818128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.818135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.818148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.828118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.828165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.828179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.828187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.828193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.828207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.838124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.838175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.838189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.838196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.838203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.838217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.848156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.848201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.793 [2024-12-06 11:29:14.848214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.793 [2024-12-06 11:29:14.848221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.793 [2024-12-06 11:29:14.848228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.793 [2024-12-06 11:29:14.848242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-12-06 11:29:14.858320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.793 [2024-12-06 11:29:14.858372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.794 [2024-12-06 11:29:14.858390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.794 [2024-12-06 11:29:14.858398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.794 [2024-12-06 11:29:14.858404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.794 [2024-12-06 11:29:14.858419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.794 qpair failed and we were unable to recover it. 
00:30:08.794 [2024-12-06 11:29:14.868336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.794 [2024-12-06 11:29:14.868381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.794 [2024-12-06 11:29:14.868394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.794 [2024-12-06 11:29:14.868402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.794 [2024-12-06 11:29:14.868408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.794 [2024-12-06 11:29:14.868422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.794 qpair failed and we were unable to recover it. 
00:30:08.794 [2024-12-06 11:29:14.878260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.794 [2024-12-06 11:29:14.878308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.794 [2024-12-06 11:29:14.878321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.794 [2024-12-06 11:29:14.878329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.794 [2024-12-06 11:29:14.878336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:08.794 [2024-12-06 11:29:14.878349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.794 qpair failed and we were unable to recover it. 
00:30:08.794 [2024-12-06 11:29:14.888398] ... 00:30:09.060 [2024-12-06 11:29:15.219350] (the same six-record CONNECT failure sequence repeats 34 more times at ~10 ms intervals: ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair "Unknown controller ID 0x1"; nvme_fabric.c: 599/610 "Connect command failed, rc -5" / "sct 1, sc 130"; nvme_tcp.c:2348/2125 "Failed to poll NVMe-oF Fabric CONNECT command" / "Failed to connect tqpair=0x239f490"; nvme_qpair.c: 812 "CQ transport error -6 (No such device or address) on qpair id 3"; each attempt ends with "qpair failed and we were unable to recover it.")
00:30:09.321 [2024-12-06 11:29:15.229310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.321 [2024-12-06 11:29:15.229356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.321 [2024-12-06 11:29:15.229369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.321 [2024-12-06 11:29:15.229377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.321 [2024-12-06 11:29:15.229384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.321 [2024-12-06 11:29:15.229398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-12-06 11:29:15.239339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.321 [2024-12-06 11:29:15.239387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.321 [2024-12-06 11:29:15.239400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.321 [2024-12-06 11:29:15.239408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.321 [2024-12-06 11:29:15.239414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.321 [2024-12-06 11:29:15.239428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-12-06 11:29:15.249224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.321 [2024-12-06 11:29:15.249273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.321 [2024-12-06 11:29:15.249289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.321 [2024-12-06 11:29:15.249297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.321 [2024-12-06 11:29:15.249304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.321 [2024-12-06 11:29:15.249318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-12-06 11:29:15.259333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.321 [2024-12-06 11:29:15.259377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.321 [2024-12-06 11:29:15.259391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.321 [2024-12-06 11:29:15.259398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.321 [2024-12-06 11:29:15.259405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.321 [2024-12-06 11:29:15.259418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-12-06 11:29:15.269346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.321 [2024-12-06 11:29:15.269394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.321 [2024-12-06 11:29:15.269409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.321 [2024-12-06 11:29:15.269417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.321 [2024-12-06 11:29:15.269423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.321 [2024-12-06 11:29:15.269437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-12-06 11:29:15.279443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.321 [2024-12-06 11:29:15.279530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.321 [2024-12-06 11:29:15.279544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.321 [2024-12-06 11:29:15.279552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.321 [2024-12-06 11:29:15.279559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.321 [2024-12-06 11:29:15.279574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.321 [2024-12-06 11:29:15.289456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.321 [2024-12-06 11:29:15.289506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.321 [2024-12-06 11:29:15.289519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.321 [2024-12-06 11:29:15.289527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.321 [2024-12-06 11:29:15.289537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.321 [2024-12-06 11:29:15.289551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.321 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.299466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.299514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.299528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.299536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.299542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.299556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.309493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.309538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.309552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.309559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.309566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.309579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.319570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.319653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.319667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.319675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.319682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.319696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.329584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.329633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.329646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.329654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.329660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.329674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.339456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.339503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.339516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.339523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.339530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.339544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.349603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.349690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.349704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.349712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.349719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.349732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.359628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.359683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.359708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.359717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.359725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.359744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.369678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.369728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.369745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.369753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.369760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.369777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.379698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.379742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.379760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.379768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.379775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.379790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.389587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.389633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.389647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.389654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.389661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.389675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.399815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.399870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.399885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.399892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.399899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.399913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.409654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.409704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.409717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.409725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.409731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.409745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.419706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.322 [2024-12-06 11:29:15.419752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.322 [2024-12-06 11:29:15.419766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.322 [2024-12-06 11:29:15.419773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.322 [2024-12-06 11:29:15.419783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.322 [2024-12-06 11:29:15.419797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.322 qpair failed and we were unable to recover it. 
00:30:09.322 [2024-12-06 11:29:15.429879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.323 [2024-12-06 11:29:15.429925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.323 [2024-12-06 11:29:15.429938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.323 [2024-12-06 11:29:15.429946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.323 [2024-12-06 11:29:15.429952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.323 [2024-12-06 11:29:15.429967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-12-06 11:29:15.439729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.323 [2024-12-06 11:29:15.439774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.323 [2024-12-06 11:29:15.439788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.323 [2024-12-06 11:29:15.439795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.323 [2024-12-06 11:29:15.439802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.323 [2024-12-06 11:29:15.439815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-12-06 11:29:15.449864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.323 [2024-12-06 11:29:15.449910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.323 [2024-12-06 11:29:15.449924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.323 [2024-12-06 11:29:15.449932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.323 [2024-12-06 11:29:15.449938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.323 [2024-12-06 11:29:15.449952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-12-06 11:29:15.459914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.323 [2024-12-06 11:29:15.459963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.323 [2024-12-06 11:29:15.459976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.323 [2024-12-06 11:29:15.459984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.323 [2024-12-06 11:29:15.459991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.323 [2024-12-06 11:29:15.460005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-12-06 11:29:15.469914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.323 [2024-12-06 11:29:15.469954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.323 [2024-12-06 11:29:15.469968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.323 [2024-12-06 11:29:15.469975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.323 [2024-12-06 11:29:15.469982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.323 [2024-12-06 11:29:15.469996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.323 [2024-12-06 11:29:15.479989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.323 [2024-12-06 11:29:15.480037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.323 [2024-12-06 11:29:15.480052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.323 [2024-12-06 11:29:15.480060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.323 [2024-12-06 11:29:15.480067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.323 [2024-12-06 11:29:15.480081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.323 qpair failed and we were unable to recover it. 
00:30:09.584 [2024-12-06 11:29:15.490051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.584 [2024-12-06 11:29:15.490133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.584 [2024-12-06 11:29:15.490146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.584 [2024-12-06 11:29:15.490154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.584 [2024-12-06 11:29:15.490162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.584 [2024-12-06 11:29:15.490176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.584 qpair failed and we were unable to recover it. 
00:30:09.584 [2024-12-06 11:29:15.500044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.584 [2024-12-06 11:29:15.500090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.584 [2024-12-06 11:29:15.500103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.584 [2024-12-06 11:29:15.500111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.584 [2024-12-06 11:29:15.500118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.584 [2024-12-06 11:29:15.500132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.584 qpair failed and we were unable to recover it.
00:30:09.584 [2024-12-06 11:29:15.510035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.584 [2024-12-06 11:29:15.510079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.584 [2024-12-06 11:29:15.510096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.584 [2024-12-06 11:29:15.510104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.584 [2024-12-06 11:29:15.510111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.584 [2024-12-06 11:29:15.510125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.584 qpair failed and we were unable to recover it.
00:30:09.584 [2024-12-06 11:29:15.519959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.584 [2024-12-06 11:29:15.520035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.584 [2024-12-06 11:29:15.520050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.584 [2024-12-06 11:29:15.520058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.584 [2024-12-06 11:29:15.520064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.584 [2024-12-06 11:29:15.520080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.530090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.530136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.530151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.530159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.530166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.530179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.540097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.540179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.540192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.540201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.540207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.540221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.550026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.550075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.550089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.550096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.550110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.550124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.560184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.560272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.560286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.560293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.560301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.560314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.570197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.570244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.570258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.570265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.570272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.570285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.580196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.580246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.580260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.580267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.580274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.580288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.590252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.590297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.590311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.590318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.590325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.590338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.600294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.600340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.600354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.600362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.600368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.600382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.610320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.610371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.610384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.610392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.610399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.610412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.620352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.620394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.620407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.620414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.620421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.620434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.630338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.630401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.630415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.630423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.630429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.630443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.640257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.640303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.640319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.640327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.640334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.640347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.585 [2024-12-06 11:29:15.650389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.585 [2024-12-06 11:29:15.650436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.585 [2024-12-06 11:29:15.650450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.585 [2024-12-06 11:29:15.650458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.585 [2024-12-06 11:29:15.650464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.585 [2024-12-06 11:29:15.650478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.585 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.660435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.660482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.660495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.660503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.660510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.660523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.670464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.670558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.670572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.670580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.670591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.670606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.680372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.680419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.680434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.680441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.680451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.680466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.690530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.690578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.690593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.690600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.690607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.690621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.700563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.700652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.700666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.700673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.700680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.700694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.710624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.710695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.710709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.710716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.710722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.710736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.720593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.720649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.720662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.720670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.720676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.720690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.730621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.730670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.730684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.730692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.730698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.730712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.586 [2024-12-06 11:29:15.740678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.586 [2024-12-06 11:29:15.740765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.586 [2024-12-06 11:29:15.740779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.586 [2024-12-06 11:29:15.740787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.586 [2024-12-06 11:29:15.740793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.586 [2024-12-06 11:29:15.740807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.586 qpair failed and we were unable to recover it.
00:30:09.847 [2024-12-06 11:29:15.750682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.847 [2024-12-06 11:29:15.750727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.847 [2024-12-06 11:29:15.750741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.847 [2024-12-06 11:29:15.750749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.847 [2024-12-06 11:29:15.750755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.847 [2024-12-06 11:29:15.750769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.847 qpair failed and we were unable to recover it.
00:30:09.847 [2024-12-06 11:29:15.760710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.847 [2024-12-06 11:29:15.760759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.847 [2024-12-06 11:29:15.760773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.847 [2024-12-06 11:29:15.760780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.847 [2024-12-06 11:29:15.760787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.847 [2024-12-06 11:29:15.760800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.847 qpair failed and we were unable to recover it.
00:30:09.847 [2024-12-06 11:29:15.770757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.847 [2024-12-06 11:29:15.770833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.847 [2024-12-06 11:29:15.770850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.770858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.770874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.770889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.780755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.848 [2024-12-06 11:29:15.780802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.848 [2024-12-06 11:29:15.780816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.780823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.780830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.780844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.790786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.848 [2024-12-06 11:29:15.790839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.848 [2024-12-06 11:29:15.790852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.790859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.790869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.790883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.800839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.848 [2024-12-06 11:29:15.800939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.848 [2024-12-06 11:29:15.800953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.800960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.800967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.800980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.810865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.848 [2024-12-06 11:29:15.810914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.848 [2024-12-06 11:29:15.810928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.810935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.810945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.810959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.820870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.848 [2024-12-06 11:29:15.820917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.848 [2024-12-06 11:29:15.820931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.820938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.820944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.820959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.830770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.848 [2024-12-06 11:29:15.830813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.848 [2024-12-06 11:29:15.830826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.830834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.830841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.830855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.840946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.848 [2024-12-06 11:29:15.840995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.848 [2024-12-06 11:29:15.841008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.848 [2024-12-06 11:29:15.841016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.848 [2024-12-06 11:29:15.841023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:09.848 [2024-12-06 11:29:15.841037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:09.848 qpair failed and we were unable to recover it.
00:30:09.848 [2024-12-06 11:29:15.850966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.848 [2024-12-06 11:29:15.851033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.848 [2024-12-06 11:29:15.851046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.848 [2024-12-06 11:29:15.851054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.848 [2024-12-06 11:29:15.851060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.848 [2024-12-06 11:29:15.851074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.848 qpair failed and we were unable to recover it. 
00:30:09.848 [2024-12-06 11:29:15.861002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.848 [2024-12-06 11:29:15.861072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.848 [2024-12-06 11:29:15.861085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.848 [2024-12-06 11:29:15.861092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.848 [2024-12-06 11:29:15.861099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.848 [2024-12-06 11:29:15.861113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.848 qpair failed and we were unable to recover it. 
00:30:09.848 [2024-12-06 11:29:15.871019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.848 [2024-12-06 11:29:15.871063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.848 [2024-12-06 11:29:15.871076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.848 [2024-12-06 11:29:15.871084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.848 [2024-12-06 11:29:15.871091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.848 [2024-12-06 11:29:15.871104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.848 qpair failed and we were unable to recover it. 
00:30:09.848 [2024-12-06 11:29:15.881053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.848 [2024-12-06 11:29:15.881099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.848 [2024-12-06 11:29:15.881113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.848 [2024-12-06 11:29:15.881120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.848 [2024-12-06 11:29:15.881127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.848 [2024-12-06 11:29:15.881141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.848 qpair failed and we were unable to recover it. 
00:30:09.848 [2024-12-06 11:29:15.891077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.848 [2024-12-06 11:29:15.891131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.848 [2024-12-06 11:29:15.891145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.848 [2024-12-06 11:29:15.891153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.848 [2024-12-06 11:29:15.891160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.848 [2024-12-06 11:29:15.891174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.848 qpair failed and we were unable to recover it. 
00:30:09.848 [2024-12-06 11:29:15.901098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.848 [2024-12-06 11:29:15.901178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.848 [2024-12-06 11:29:15.901196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.848 [2024-12-06 11:29:15.901203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.901214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.901228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.911107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.911156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.911170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.911177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.911183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.911197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.921110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.921167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.921180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.921187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.921194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.921207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.931173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.931275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.931288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.931295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.931302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.931316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.941169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.941215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.941229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.941236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.941246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.941260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.951213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.951258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.951272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.951279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.951285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.951299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.961235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.961294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.961308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.961315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.961321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.961335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.971158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.971204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.971218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.971225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.971231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.971245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.981173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.981225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.981238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.981247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.981254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.981269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:15.991326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:15.991376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:15.991389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:15.991396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:15.991403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:15.991416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:16.001359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:16.001408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:16.001422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:16.001430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:16.001436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:16.001451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:09.849 [2024-12-06 11:29:16.011399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.849 [2024-12-06 11:29:16.011452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.849 [2024-12-06 11:29:16.011466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.849 [2024-12-06 11:29:16.011473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.849 [2024-12-06 11:29:16.011480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:09.849 [2024-12-06 11:29:16.011493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:09.849 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.021415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.021459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.021473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.021480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.021487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.021501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.031422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.031465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.031482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.031490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.031496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.031511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.041482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.041531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.041545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.041552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.041559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.041573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.051463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.051516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.051533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.051540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.051547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.051563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.061503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.061549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.061563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.061570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.061577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.061591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.071539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.071584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.071598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.071605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.071615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.071629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.081566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.081660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.081674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.081682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.081689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.081703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.091612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.091666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.091683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.091691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.091698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.091713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.101614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.101660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.101674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.101683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.111 [2024-12-06 11:29:16.101689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.111 [2024-12-06 11:29:16.101703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.111 qpair failed and we were unable to recover it. 
00:30:10.111 [2024-12-06 11:29:16.111610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.111 [2024-12-06 11:29:16.111657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.111 [2024-12-06 11:29:16.111671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.111 [2024-12-06 11:29:16.111678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.112 [2024-12-06 11:29:16.111685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490 00:30:10.112 [2024-12-06 11:29:16.111699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:10.112 qpair failed and we were unable to recover it. 
00:30:10.112 [2024-12-06 11:29:16.121673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.112 [2024-12-06 11:29:16.121722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.112 [2024-12-06 11:29:16.121736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.112 [2024-12-06 11:29:16.121743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.112 [2024-12-06 11:29:16.121750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490
00:30:10.112 [2024-12-06 11:29:16.121764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:10.112 qpair failed and we were unable to recover it.
00:30:10.375 (last six-record CONNECT failure sequence repeated 27 more times for tqpair=0x239f490, qpair id 3, at ~10 ms intervals, 11:29:16.131604 through 11:29:16.392492)
00:30:10.376 [2024-12-06 11:29:16.402435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.376 [2024-12-06 11:29:16.402537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.376 [2024-12-06 11:29:16.402601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.376 [2024-12-06 11:29:16.402627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.376 [2024-12-06 11:29:16.402648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f078c000b90
00:30:10.376 [2024-12-06 11:29:16.402703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.376 qpair failed and we were unable to recover it.
00:30:10.376 [2024-12-06 11:29:16.412383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.376 [2024-12-06 11:29:16.412471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.376 [2024-12-06 11:29:16.412519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.376 [2024-12-06 11:29:16.412538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.376 [2024-12-06 11:29:16.412553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f078c000b90
00:30:10.376 [2024-12-06 11:29:16.412594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.376 qpair failed and we were unable to recover it.
00:30:10.376 [2024-12-06 11:29:16.422475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.376 [2024-12-06 11:29:16.422523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.376 [2024-12-06 11:29:16.422548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.376 [2024-12-06 11:29:16.422555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.376 [2024-12-06 11:29:16.422560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0784000b90
00:30:10.376 [2024-12-06 11:29:16.422574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.376 qpair failed and we were unable to recover it.
00:30:10.376 [2024-12-06 11:29:16.432368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.376 [2024-12-06 11:29:16.432413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.376 [2024-12-06 11:29:16.432424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.376 [2024-12-06 11:29:16.432429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.376 [2024-12-06 11:29:16.432434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0784000b90
00:30:10.376 [2024-12-06 11:29:16.432445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.376 qpair failed and we were unable to recover it.
00:30:10.376 [2024-12-06 11:29:16.432602] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:30:10.376 A controller has encountered a failure and is being reset.
00:30:10.376 [2024-12-06 11:29:16.432735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239c030 (9): Bad file descriptor
00:30:10.376 Controller properly reset.
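The *ERROR* records above all share one fixed shape: a bracketed wall-clock timestamp, a source `file.c: line:function` locator, the `*ERROR*` marker, and a message body. A minimal sketch of a condenser for such runs is shown below; the regex is inferred from the record text in this log and is an assumption, not part of SPDK itself.

```python
import re
from collections import Counter

# Assumed record shape, inferred from this log:
#   [2024-12-06 11:29:16.121673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
RECORD = re.compile(
    r"\[(?P<ts>[^\]]+)\]\s+"              # bracketed wall-clock timestamp
    r"(?P<file>[\w.]+):\s*(?P<line>\d+):"  # source file and line number
    r"(?P<func>\w+):\s+\*ERROR\*:\s+"      # emitting function, then the ERROR marker
    r"(?P<msg>.*)"                         # free-form message body
)

def summarize(lines):
    """Count identical error messages, keyed by (file, function, message)."""
    counts = Counter()
    for line in lines:
        m = RECORD.search(line)
        if m:
            counts[(m["file"], m["func"], m["msg"])] += 1
    return counts

sample = [
    "[2024-12-06 11:29:16.121673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1",
    "[2024-12-06 11:29:16.131604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1",
    "[2024-12-06 11:29:16.121750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x239f490",
]
print(summarize(sample))
```

Running this over the CONNECT-failure run above would report each of the six distinct messages once per iteration, making the repeat count visible at a glance.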
00:30:10.376 Initializing NVMe Controllers
00:30:10.376 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:10.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:10.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:10.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:10.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:10.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:10.376 Initialization complete. Launching workers.
00:30:10.376 Starting thread on core 1
00:30:10.376 Starting thread on core 2
00:30:10.376 Starting thread on core 3
00:30:10.376 Starting thread on core 0
00:30:10.376 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:10.376
00:30:10.376 real 0m11.368s
00:30:10.376 user 0m21.779s
00:30:10.376 sys 0m3.628s
00:30:10.376 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:10.376 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:10.376 ************************************
00:30:10.376 END TEST nvmf_target_disconnect_tc2
00:30:10.376 ************************************
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3629575 ']'
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3629575
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3629575 ']'
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3629575
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3629575
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3629575' 00:30:10.636 killing process with pid 3629575 00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3629575 00:30:10.636 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3629575 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.043 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.044 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.044 11:29:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.954 11:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.954 00:30:12.954 real 0m22.484s 00:30:12.954 user 0m49.643s 00:30:12.954 
sys 0m10.400s 00:30:12.954 11:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.954 11:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:12.954 ************************************ 00:30:12.954 END TEST nvmf_target_disconnect 00:30:12.954 ************************************ 00:30:12.954 11:29:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:12.954 00:30:12.954 real 6m47.494s 00:30:12.954 user 11m27.970s 00:30:12.955 sys 2m24.646s 00:30:12.955 11:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.955 11:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.955 ************************************ 00:30:12.955 END TEST nvmf_host 00:30:12.955 ************************************ 00:30:12.955 11:29:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:12.955 11:29:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:12.955 11:29:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:12.955 11:29:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:12.955 11:29:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.955 11:29:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.955 ************************************ 00:30:12.955 START TEST nvmf_target_core_interrupt_mode 00:30:12.955 ************************************ 00:30:12.955 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:12.955 * Looking for test storage... 
00:30:13.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:13.215 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:13.215 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:30:13.215 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:13.215 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:13.215 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.215 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:13.216 11:29:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:13.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.216 --rc 
genhtml_branch_coverage=1 00:30:13.216 --rc genhtml_function_coverage=1 00:30:13.216 --rc genhtml_legend=1 00:30:13.216 --rc geninfo_all_blocks=1 00:30:13.216 --rc geninfo_unexecuted_blocks=1 00:30:13.216 00:30:13.216 ' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:13.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.216 --rc genhtml_branch_coverage=1 00:30:13.216 --rc genhtml_function_coverage=1 00:30:13.216 --rc genhtml_legend=1 00:30:13.216 --rc geninfo_all_blocks=1 00:30:13.216 --rc geninfo_unexecuted_blocks=1 00:30:13.216 00:30:13.216 ' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:13.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.216 --rc genhtml_branch_coverage=1 00:30:13.216 --rc genhtml_function_coverage=1 00:30:13.216 --rc genhtml_legend=1 00:30:13.216 --rc geninfo_all_blocks=1 00:30:13.216 --rc geninfo_unexecuted_blocks=1 00:30:13.216 00:30:13.216 ' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:13.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.216 --rc genhtml_branch_coverage=1 00:30:13.216 --rc genhtml_function_coverage=1 00:30:13.216 --rc genhtml_legend=1 00:30:13.216 --rc geninfo_all_blocks=1 00:30:13.216 --rc geninfo_unexecuted_blocks=1 00:30:13.216 00:30:13.216 ' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.216 
11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.216 11:29:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.216 
11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:13.216 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:13.217 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:13.217 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:13.217 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:13.217 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.217 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:13.217 ************************************ 00:30:13.217 START TEST nvmf_abort 00:30:13.217 ************************************ 00:30:13.217 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:13.479 * Looking for test storage... 
00:30:13.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:13.479 11:29:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:13.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.479 --rc genhtml_branch_coverage=1 00:30:13.479 --rc genhtml_function_coverage=1 00:30:13.479 --rc genhtml_legend=1 00:30:13.479 --rc geninfo_all_blocks=1 00:30:13.479 --rc geninfo_unexecuted_blocks=1 00:30:13.479 00:30:13.479 ' 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:13.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.479 --rc genhtml_branch_coverage=1 00:30:13.479 --rc genhtml_function_coverage=1 00:30:13.479 --rc genhtml_legend=1 00:30:13.479 --rc geninfo_all_blocks=1 00:30:13.479 --rc geninfo_unexecuted_blocks=1 00:30:13.479 00:30:13.479 ' 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:13.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.479 --rc genhtml_branch_coverage=1 00:30:13.479 --rc genhtml_function_coverage=1 00:30:13.479 --rc genhtml_legend=1 00:30:13.479 --rc geninfo_all_blocks=1 00:30:13.479 --rc geninfo_unexecuted_blocks=1 00:30:13.479 00:30:13.479 ' 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:13.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.479 --rc genhtml_branch_coverage=1 00:30:13.479 --rc genhtml_function_coverage=1 00:30:13.479 --rc genhtml_legend=1 00:30:13.479 --rc geninfo_all_blocks=1 00:30:13.479 --rc geninfo_unexecuted_blocks=1 00:30:13.479 00:30:13.479 ' 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.479 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.479 11:29:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.480 11:29:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.480 11:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:21.626 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:21.627 11:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.627 
11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.627 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.627 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.627 11:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.627 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:21.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:30:21.889 00:30:21.889 --- 10.0.0.2 ping statistics --- 00:30:21.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.889 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:21.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:30:21.889 00:30:21.889 --- 10.0.0.1 ping statistics --- 00:30:21.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.889 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3635694 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3635694 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3635694 ']' 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.889 11:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:21.889 [2024-12-06 11:29:28.049613] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:21.889 [2024-12-06 11:29:28.050598] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:30:21.889 [2024-12-06 11:29:28.050633] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.152 [2024-12-06 11:29:28.152056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:22.152 [2024-12-06 11:29:28.187138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.152 [2024-12-06 11:29:28.187172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.152 [2024-12-06 11:29:28.187181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.152 [2024-12-06 11:29:28.187187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.152 [2024-12-06 11:29:28.187193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.152 [2024-12-06 11:29:28.188491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.152 [2024-12-06 11:29:28.188646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.152 [2024-12-06 11:29:28.188647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.152 [2024-12-06 11:29:28.244960] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:22.152 [2024-12-06 11:29:28.245013] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.152 [2024-12-06 11:29:28.245542] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:22.152 [2024-12-06 11:29:28.245897] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.152 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.152 [2024-12-06 11:29:28.317405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:22.414 Malloc0 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.414 Delay0 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.414 [2024-12-06 11:29:28.409408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.414 11:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:22.414 [2024-12-06 11:29:28.493685] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:24.962 Initializing NVMe Controllers 00:30:24.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:24.962 controller IO queue size 128 less than required 00:30:24.962 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:24.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:24.962 Initialization complete. Launching workers. 
00:30:24.962 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29035 00:30:24.962 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29092, failed to submit 66 00:30:24.962 success 29035, unsuccessful 57, failed 0 00:30:24.962 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:24.962 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.962 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:24.962 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.962 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:24.962 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:24.962 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.963 rmmod nvme_tcp 00:30:24.963 rmmod nvme_fabrics 00:30:24.963 rmmod nvme_keyring 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.963 11:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3635694 ']' 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3635694 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3635694 ']' 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3635694 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3635694 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3635694' 00:30:24.963 killing process with pid 3635694 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3635694 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3635694 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.963 11:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.963 11:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.878 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.878 00:30:26.878 real 0m13.660s 00:30:26.878 user 0m10.812s 00:30:26.878 sys 0m7.602s 00:30:26.878 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.878 11:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.878 ************************************ 00:30:26.878 END TEST nvmf_abort 00:30:26.878 ************************************ 00:30:26.878 11:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:26.878 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:26.878 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.878 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.138 ************************************ 00:30:27.138 START TEST nvmf_ns_hotplug_stress 00:30:27.138 ************************************ 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:27.138 * Looking for test storage... 
00:30:27.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.138 11:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.138 11:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.138 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:27.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.138 --rc genhtml_branch_coverage=1 00:30:27.138 --rc genhtml_function_coverage=1 00:30:27.138 --rc genhtml_legend=1 00:30:27.138 --rc geninfo_all_blocks=1 00:30:27.138 --rc geninfo_unexecuted_blocks=1 00:30:27.138 00:30:27.139 ' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:27.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.139 --rc genhtml_branch_coverage=1 00:30:27.139 --rc genhtml_function_coverage=1 00:30:27.139 --rc genhtml_legend=1 00:30:27.139 --rc geninfo_all_blocks=1 00:30:27.139 --rc geninfo_unexecuted_blocks=1 00:30:27.139 00:30:27.139 ' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:27.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.139 --rc genhtml_branch_coverage=1 00:30:27.139 --rc genhtml_function_coverage=1 00:30:27.139 --rc genhtml_legend=1 00:30:27.139 --rc geninfo_all_blocks=1 00:30:27.139 --rc geninfo_unexecuted_blocks=1 00:30:27.139 00:30:27.139 ' 00:30:27.139 11:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:27.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.139 --rc genhtml_branch_coverage=1 00:30:27.139 --rc genhtml_function_coverage=1 00:30:27.139 --rc genhtml_legend=1 00:30:27.139 --rc geninfo_all_blocks=1 00:30:27.139 --rc geninfo_unexecuted_blocks=1 00:30:27.139 00:30:27.139 ' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.139 11:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.139 
11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.139 11:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.326 
11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.326 11:29:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:35.326 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.326 11:29:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:35.326 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.326 
11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.326 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:35.327 Found net devices under 0000:31:00.0: cvl_0_0 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:35.327 Found net devices under 0000:31:00.1: cvl_0_1 00:30:35.327 
11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.327 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:30:35.589 00:30:35.589 --- 10.0.0.2 ping statistics --- 00:30:35.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.589 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:30:35.589 00:30:35.589 --- 10.0.0.1 ping statistics --- 00:30:35.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.589 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.589 11:29:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3640843 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3640843 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3640843 ']' 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:35.589 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:35.851 [2024-12-06 11:29:41.798736] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:35.851 [2024-12-06 11:29:41.799922] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:30:35.851 [2024-12-06 11:29:41.799977] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.851 [2024-12-06 11:29:41.911417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.851 [2024-12-06 11:29:41.963250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.851 [2024-12-06 11:29:41.963302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.851 [2024-12-06 11:29:41.963311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.851 [2024-12-06 11:29:41.963318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.851 [2024-12-06 11:29:41.963324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:35.851 [2024-12-06 11:29:41.965145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.851 [2024-12-06 11:29:41.965311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.851 [2024-12-06 11:29:41.965312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.112 [2024-12-06 11:29:42.042530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:36.112 [2024-12-06 11:29:42.042614] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:36.112 [2024-12-06 11:29:42.043204] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:36.112 [2024-12-06 11:29:42.043501] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:36.684 [2024-12-06 11:29:42.770353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.684 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:36.945 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.206 [2024-12-06 11:29:43.151015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.206 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:37.206 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:37.468 Malloc0 00:30:37.468 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:37.730 Delay0 00:30:37.730 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.992 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:37.992 NULL1 00:30:37.992 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:38.254 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3641411 00:30:38.254 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:38.254 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.254 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:38.514 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.514 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:38.514 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:38.775 true 00:30:38.775 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:38.775 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.037 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.037 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:39.037 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:39.300 true 00:30:39.300 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:39.300 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.561 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.822 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:39.822 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:39.822 true 00:30:39.822 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:39.822 11:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.083 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.344 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:40.344 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:40.344 true 00:30:40.604 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:40.604 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.604 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.865 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:40.865 11:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:41.125 true 00:30:41.125 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:41.125 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.125 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.387 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:41.388 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:41.648 true 00:30:41.648 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:41.648 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.909 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.909 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:30:41.909 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:42.169 true 00:30:42.169 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:42.169 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.429 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.429 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:42.429 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:42.689 true 00:30:42.689 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:42.689 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.950 11:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.950 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:30:42.950 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:43.211 true 00:30:43.211 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:43.211 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.474 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.474 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:43.474 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:43.736 true 00:30:43.736 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:43.736 11:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.997 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.258 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:44.258 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:44.258 true 00:30:44.258 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:44.258 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.520 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.782 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:44.782 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:44.782 true 00:30:44.782 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:44.782 11:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.044 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.306 11:29:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:45.306 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:45.306 true 00:30:45.306 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:45.306 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.566 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.827 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:45.827 11:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:45.827 true 00:30:46.086 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:46.086 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.086 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:30:46.346 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:46.346 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:46.606 true 00:30:46.606 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:46.606 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.606 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.867 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:46.867 11:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:47.127 true 00:30:47.127 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:47.127 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.127 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.388 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:47.388 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:47.650 true 00:30:47.650 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:47.650 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.911 11:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.911 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:47.911 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:48.173 true 00:30:48.173 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:48.173 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.434 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.434 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:48.434 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:48.694 true 00:30:48.694 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:48.694 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.955 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.955 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:48.955 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:49.215 true 00:30:49.215 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:49.215 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.474 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.474 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:49.474 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:49.734 true 00:30:49.734 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:49.734 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.995 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.256 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:50.257 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:50.257 true 00:30:50.257 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:50.257 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.515 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.774 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:50.774 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:50.774 true 00:30:50.774 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:50.774 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.034 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.293 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:51.293 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:51.293 true 00:30:51.293 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:51.293 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.553 11:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.814 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:51.814 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:52.074 true 00:30:52.074 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:52.074 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.074 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.334 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:52.334 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:52.596 true 00:30:52.596 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:52.596 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:30:52.596 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.857 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:52.857 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:53.117 true 00:30:53.117 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:53.117 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.117 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.377 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:53.377 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:53.636 true 00:30:53.636 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:53.636 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:53.896 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.896 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:53.896 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:54.155 true 00:30:54.155 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:54.155 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.415 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.415 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:54.415 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:54.675 true 00:30:54.675 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:54.675 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.935 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.935 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:54.935 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:55.195 true 00:30:55.195 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:55.195 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.455 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.455 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:55.455 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:55.714 true 00:30:55.714 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:55.714 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.976 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.976 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:55.976 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:56.236 true 00:30:56.236 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:56.236 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.496 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.756 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:56.757 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:56.757 true 00:30:56.757 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:56.757 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.018 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.278 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:57.278 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:57.278 true 00:30:57.278 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:57.278 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.539 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.800 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:57.800 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:57.800 true 00:30:57.800 11:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:57.800 11:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.061 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.322 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:58.322 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:58.322 true 00:30:58.583 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:58.583 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.583 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.845 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:58.845 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:59.122 true 00:30:59.122 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 
00:30:59.122 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.122 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.382 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:59.382 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:59.641 true 00:30:59.641 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:30:59.641 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.641 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.902 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:59.902 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:00.162 true 00:31:00.162 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3641411 00:31:00.162 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.162 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.423 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:00.423 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:00.684 true 00:31:00.684 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:00.684 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.684 11:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.945 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:00.945 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:01.206 true 00:31:01.206 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:01.206 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.468 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.468 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:01.468 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:01.730 true 00:31:01.730 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:01.730 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.992 11:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.992 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:01.992 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:02.255 true 00:31:02.255 11:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:02.255 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.516 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.516 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:02.516 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:02.777 true 00:31:02.777 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:02.777 11:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.038 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.300 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:03.300 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:03.300 true 
00:31:03.300 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:03.300 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.561 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.561 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:03.561 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:03.822 true 00:31:03.822 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:03.822 11:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.085 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.346 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:04.346 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:31:04.346 true 00:31:04.346 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:04.346 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.607 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.868 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:04.868 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:04.868 true 00:31:04.868 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:04.868 11:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.129 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.391 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:05.391 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:31:05.391 true 00:31:05.391 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:05.391 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.653 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.914 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:05.914 11:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:06.174 true 00:31:06.174 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:06.174 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.174 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.435 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:06.435 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:06.696 true 00:31:06.696 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:06.696 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.957 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.957 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:31:06.957 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:31:07.235 true 00:31:07.235 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:07.235 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.498 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.498 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:31:07.498 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:31:07.759 true 00:31:07.759 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:07.759 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.021 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.021 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:31:08.021 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:31:08.282 true 00:31:08.282 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:08.282 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.543 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.543 Initializing NVMe Controllers 00:31:08.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.543 Controller IO queue size 128, less than required. 
00:31:08.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:08.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:08.543 Initialization complete. Launching workers. 00:31:08.543 ======================================================== 00:31:08.543 Latency(us) 00:31:08.543 Device Information : IOPS MiB/s Average min max 00:31:08.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29609.70 14.46 4322.75 1485.46 10796.62 00:31:08.543 ======================================================== 00:31:08.543 Total : 29609.70 14.46 4322.75 1485.46 10796.62 00:31:08.543 00:31:08.543 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:31:08.543 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:31:08.804 true 00:31:08.804 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3641411 00:31:08.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3641411) - No such process 00:31:08.804 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3641411 00:31:08.804 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.067 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:31:09.067 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:09.067 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:09.067 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:09.067 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:09.067 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:09.330 null0 00:31:09.330 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:09.330 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:09.330 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:09.589 null1 00:31:09.589 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:09.589 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:09.589 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:09.589 null2 00:31:09.589 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:09.589 11:30:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:09.589 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:09.848 null3 00:31:09.848 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:09.848 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:09.848 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:10.106 null4 00:31:10.106 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.106 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.106 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:10.106 null5 00:31:10.106 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.106 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.106 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:10.366 null6 00:31:10.366 11:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:10.366 null7 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
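The creation loop traced above (`@58`–`@60`) sets `nthreads=8` and creates one null bdev per worker, `null0` through `null7`, each 100 MB with a 4096-byte block size. A self-contained sketch of that loop, with `rpc.py` replaced by `echo` so it runs without an SPDK target:

```shell
# Hypothetical sketch of ns_hotplug_stress.sh@58-@60: build the eight
# null bdevs the parallel workers will attach as namespaces.
nthreads=8
cmds=()
for (( i = 0; i < nthreads; i++ )); do
    # bdev_null_create <name> <size_mb> <block_size>
    cmds+=("rpc.py bdev_null_create null$i 100 4096")
done
printf '%s\n' "${cmds[@]}"
```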
00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:10.366 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3648145 3648146 3648147 3648148 3648149 3648151 3648153 3648155 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:10.367 11:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.367 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:10.627 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.627 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:10.627 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:10.627 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:10.627 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:10.627 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.887 11:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.887 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:10.887 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.888 11:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:10.888 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.150 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:11.411 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:11.674 11:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.674 11:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.674 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:11.936 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:11.936 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:11.936 11:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.198 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.199 11:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:12.199 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.460 11:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.460 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.722 11:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:12.722 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.723 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.723 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:12.723 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.723 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.723 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:12.723 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:12.985 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.985 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.985 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:12.985 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.985 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.985 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:12.985 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:12.985 11:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.985 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:13.247 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.248 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.248 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:13.509 11:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.509 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.771 11:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:13.771 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:14.031 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.031 11:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.031 11:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.031 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.291 
11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.291 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:14.551 11:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.551 rmmod nvme_tcp 00:31:14.551 rmmod nvme_fabrics 00:31:14.551 rmmod nvme_keyring 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3640843 ']' 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3640843 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3640843 ']' 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3640843 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3640843 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3640843' 00:31:14.551 killing process with pid 3640843 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3640843 00:31:14.551 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3640843 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:31:14.812 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.359 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.359 00:31:17.359 real 0m49.889s 00:31:17.359 user 3m3.706s 00:31:17.359 sys 0m23.124s 00:31:17.359 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.359 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:17.359 ************************************ 00:31:17.359 END TEST nvmf_ns_hotplug_stress 00:31:17.359 ************************************ 00:31:17.359 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:17.359 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:17.359 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.359 11:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:17.359 ************************************ 00:31:17.359 START TEST nvmf_delete_subsystem 00:31:17.359 ************************************ 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:17.359 * Looking for test storage... 
00:31:17.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.359 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.359 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.359 --rc genhtml_branch_coverage=1 00:31:17.359 --rc genhtml_function_coverage=1 00:31:17.359 --rc genhtml_legend=1 00:31:17.359 --rc geninfo_all_blocks=1 00:31:17.359 --rc geninfo_unexecuted_blocks=1 00:31:17.359 00:31:17.359 ' 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.359 --rc genhtml_branch_coverage=1 00:31:17.359 --rc genhtml_function_coverage=1 00:31:17.359 --rc genhtml_legend=1 00:31:17.359 --rc geninfo_all_blocks=1 00:31:17.359 --rc geninfo_unexecuted_blocks=1 00:31:17.359 00:31:17.359 ' 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.359 --rc genhtml_branch_coverage=1 00:31:17.359 --rc genhtml_function_coverage=1 00:31:17.359 --rc genhtml_legend=1 00:31:17.359 --rc geninfo_all_blocks=1 00:31:17.359 --rc geninfo_unexecuted_blocks=1 00:31:17.359 00:31:17.359 ' 00:31:17.359 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.359 --rc genhtml_branch_coverage=1 00:31:17.359 --rc genhtml_function_coverage=1 00:31:17.359 --rc genhtml_legend=1 00:31:17.359 --rc geninfo_all_blocks=1 00:31:17.359 --rc geninfo_unexecuted_blocks=1 00:31:17.359 00:31:17.359 ' 00:31:17.359 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.360 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.360 
11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.360 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.360 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:25.498 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:31:25.498 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.498 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.499 11:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:25.499 Found net devices under 0000:31:00.0: cvl_0_0 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:25.499 Found net devices under 0000:31:00.1: cvl_0_1 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:25.499 11:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.499 11:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:31:25.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:31:25.499 00:31:25.499 --- 10.0.0.2 ping statistics --- 00:31:25.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.499 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:25.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:31:25.499 00:31:25.499 --- 10.0.0.1 ping statistics --- 00:31:25.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.499 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3653682 00:31:25.499 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3653682 00:31:25.500 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:25.500 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3653682 ']' 00:31:25.500 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.500 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.500 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
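For reference, the namespace topology that `nvmf_tcp_init` assembled in the log above (target-side interface `cvl_0_0` moved into a private netns, initiator-side `cvl_0_1` left in the root namespace, then a ping in each direction) can be sketched as a standalone script. Interface names, addresses, and command order are copied from the log; the `run`/`DRY_RUN` guard is my addition so the sketch can be read and tried without root privileges.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from the log above.
# DRY_RUN=1 (default) only prints the commands; DRY_RUN=0 as root applies them.
set -euo pipefail

TARGET_IF=cvl_0_0            # NVMF_TARGET_INTERFACE in the log
INITIATOR_IF=cvl_0_1         # NVMF_INITIATOR_INTERFACE
NS=cvl_0_0_ns_spdk           # NVMF_TARGET_NAMESPACE
DRY_RUN=${DRY_RUN:-1}

run() { if [ "${DRY_RUN}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Because the target app is later started with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix in the log), the listener at 10.0.0.2:4420 lives entirely inside the namespace while perf connects from the root namespace.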
00:31:25.500 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.500 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:25.500 [2024-12-06 11:30:31.312629] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:25.500 [2024-12-06 11:30:31.313684] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:31:25.500 [2024-12-06 11:30:31.313725] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.500 [2024-12-06 11:30:31.408028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:25.500 [2024-12-06 11:30:31.444047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.500 [2024-12-06 11:30:31.444081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.500 [2024-12-06 11:30:31.444090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.500 [2024-12-06 11:30:31.444097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.500 [2024-12-06 11:30:31.444103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.500 [2024-12-06 11:30:31.445325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.500 [2024-12-06 11:30:31.445326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.500 [2024-12-06 11:30:31.501312] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:25.500 [2024-12-06 11:30:31.501911] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:25.500 [2024-12-06 11:30:31.502225] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.073 [2024-12-06 11:30:32.201917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.073 [2024-12-06 11:30:32.222733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.073 NULL1 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.073 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.334 Delay0 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3654008 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:26.334 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:26.334 [2024-12-06 11:30:32.307586] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
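The `rpc_cmd` sequence that `delete_subsystem.sh` drives in the log above (transport, subsystem, listener, null bdev wrapped in a delay bdev, namespace, then deletion while `spdk_nvme_perf` is running) can be sketched as a plain script. Every argument is copied from the log; the relative `scripts/rpc.py` path and the default `/var/tmp/spdk.sock` socket are assumptions about the SPDK checkout layout, and the leading `echo` keeps the sketch inert — drop it to issue the calls for real.

```shell
# Inert sketch of the RPC calls from the log; remove "echo " to run them.
RPC="echo ./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
# Delay bdev over NULL1; -r/-t/-w/-n are latency parameters in microseconds
# (values taken verbatim from the log).
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Later, while spdk_nvme_perf has I/O outstanding against the delay bdev:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The delay bdev is what makes the test interesting: with ~1 s latencies, the queue is guaranteed to hold in-flight I/O when the subsystem is deleted, exercising the abort path that the completion records below show.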
00:31:28.245 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.245 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.246 11:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 Write completed with error (sct=0, sc=8) 00:31:28.507 starting I/O failed: -6 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 Write completed with error (sct=0, sc=8) 00:31:28.507 Write completed with error (sct=0, sc=8) 00:31:28.507 starting I/O failed: -6 00:31:28.507 Write completed with error (sct=0, sc=8) 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 starting I/O failed: -6 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 Write completed with error (sct=0, sc=8) 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 Read completed with error (sct=0, sc=8) 00:31:28.507 starting I/O failed: -6 00:31:28.507 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, 
sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 [2024-12-06 11:30:34.531682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb592c0 is same with the state(6) to be set 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error 
(sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 
Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, 
sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 starting I/O failed: -6 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 [2024-12-06 11:30:34.534929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4d800d4b0 is same with the state(6) to be set 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error 
(sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:28.508 Write completed with error (sct=0, sc=8) 00:31:28.508 Read completed with error (sct=0, sc=8) 00:31:29.463 [2024-12-06 11:30:35.489909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5a5f0 is same with the state(6) to be set 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 
00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 [2024-12-06 11:30:35.535796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb590e0 is same with the state(6) to be set 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 
[2024-12-06 11:30:35.535949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb594a0 is same with the state(6) to be set 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 [2024-12-06 11:30:35.537548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4d800d020 is same with the state(6) to be set 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Write completed with error (sct=0, sc=8) 00:31:29.463 Read completed with error (sct=0, sc=8) 00:31:29.464 Read completed with error (sct=0, sc=8) 00:31:29.464 Read completed with error (sct=0, sc=8) 00:31:29.464 
[2024-12-06 11:30:35.537643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe4d800d7e0 is same with the state(6) to be set 00:31:29.464 Initializing NVMe Controllers 00:31:29.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.464 Controller IO queue size 128, less than required. 00:31:29.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:29.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:29.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:29.464 Initialization complete. Launching workers. 00:31:29.464 ======================================================== 00:31:29.464 Latency(us) 00:31:29.464 Device Information : IOPS MiB/s Average min max 00:31:29.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.77 0.08 904517.85 235.75 1007925.73 00:31:29.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.83 0.08 933650.01 262.88 1010133.52 00:31:29.464 ======================================================== 00:31:29.464 Total : 319.60 0.16 918539.41 235.75 1010133.52 00:31:29.464 00:31:29.464 [2024-12-06 11:30:35.538284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5a5f0 (9): Bad file descriptor 00:31:29.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:29.464 11:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.464 11:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:29.464 11:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3654008 00:31:29.464 11:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3654008 00:31:30.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3654008) - No such process 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3654008 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3654008 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3654008 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:30.036 11:30:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.036 [2024-12-06 11:30:36.066224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.036 11:30:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3654684 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:30.036 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:30.036 [2024-12-06 11:30:36.135935] subsystem.c:1791:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:31:30.609 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:30.609 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:30.609 11:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:31.181 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:31.181 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:31.181 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:31.442 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:31.442 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:31.442 11:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:32.015 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:32.015 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:32.015 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:32.601 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:32.601 11:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:32.601 11:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:33.225 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:33.225 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:33.225 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:33.520 Initializing NVMe Controllers 00:31:33.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.520 Controller IO queue size 128, less than required. 00:31:33.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:33.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:33.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:33.520 Initialization complete. Launching workers. 
00:31:33.520 ======================================================== 00:31:33.520 Latency(us) 00:31:33.520 Device Information : IOPS MiB/s Average min max 00:31:33.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002183.37 1000319.59 1006044.20 00:31:33.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004887.02 1000249.89 1043567.14 00:31:33.520 ======================================================== 00:31:33.520 Total : 256.00 0.12 1003535.20 1000249.89 1043567.14 00:31:33.520 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3654684 00:31:33.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3654684) - No such process 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3654684 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:31:33.520 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:33.520 rmmod nvme_tcp 00:31:33.520 rmmod nvme_fabrics 00:31:33.520 rmmod nvme_keyring 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3653682 ']' 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3653682 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3653682 ']' 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3653682 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3653682 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:33.783 11:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3653682' 00:31:33.783 killing process with pid 3653682 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3653682 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3653682 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.783 11:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.783 11:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.332 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:36.332 00:31:36.332 real 0m18.938s 00:31:36.332 user 0m26.850s 00:31:36.332 sys 0m7.905s 00:31:36.332 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.332 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:36.332 ************************************ 00:31:36.332 END TEST nvmf_delete_subsystem 00:31:36.332 ************************************ 00:31:36.332 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:36.333 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:36.333 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.333 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:36.333 ************************************ 00:31:36.333 START TEST nvmf_host_management 00:31:36.333 ************************************ 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:36.333 * Looking for test storage... 
00:31:36.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:36.333 11:30:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:36.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.333 --rc genhtml_branch_coverage=1 00:31:36.333 --rc genhtml_function_coverage=1 00:31:36.333 --rc genhtml_legend=1 00:31:36.333 --rc geninfo_all_blocks=1 00:31:36.333 --rc geninfo_unexecuted_blocks=1 00:31:36.333 00:31:36.333 ' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:36.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.333 --rc genhtml_branch_coverage=1 00:31:36.333 --rc genhtml_function_coverage=1 00:31:36.333 --rc genhtml_legend=1 00:31:36.333 --rc geninfo_all_blocks=1 00:31:36.333 --rc geninfo_unexecuted_blocks=1 00:31:36.333 00:31:36.333 ' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:36.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.333 --rc genhtml_branch_coverage=1 00:31:36.333 --rc genhtml_function_coverage=1 00:31:36.333 --rc genhtml_legend=1 00:31:36.333 --rc geninfo_all_blocks=1 00:31:36.333 --rc geninfo_unexecuted_blocks=1 00:31:36.333 00:31:36.333 ' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:36.333 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.333 --rc genhtml_branch_coverage=1 00:31:36.333 --rc genhtml_function_coverage=1 00:31:36.333 --rc genhtml_legend=1 00:31:36.333 --rc geninfo_all_blocks=1 00:31:36.333 --rc geninfo_unexecuted_blocks=1 00:31:36.333 00:31:36.333 ' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.333 11:30:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.333 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.334 
11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:36.334 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:44.476 
11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.476 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.476 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:44.477 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.477 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:44.477 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.477 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:44.477 Found net devices under 0000:31:00.0: cvl_0_0 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:44.477 Found net devices under 0000:31:00.1: cvl_0_1 00:31:44.477 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:44.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:31:44.477 00:31:44.477 --- 10.0.0.2 ping statistics --- 00:31:44.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.477 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:44.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:31:44.477 00:31:44.477 --- 10.0.0.1 ping statistics --- 00:31:44.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.477 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3660047 00:31:44.477 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3660047 00:31:44.478 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:44.478 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3660047 ']' 00:31:44.478 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.478 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.478 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.478 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.478 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.478 [2024-12-06 11:30:50.568493] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.478 [2024-12-06 11:30:50.569221] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:31:44.478 [2024-12-06 11:30:50.569254] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.738 [2024-12-06 11:30:50.662588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:44.738 [2024-12-06 11:30:50.702760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.738 [2024-12-06 11:30:50.702795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.738 [2024-12-06 11:30:50.702803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.738 [2024-12-06 11:30:50.702810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.738 [2024-12-06 11:30:50.702816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:44.738 [2024-12-06 11:30:50.704432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.738 [2024-12-06 11:30:50.704589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:44.738 [2024-12-06 11:30:50.704746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.738 [2024-12-06 11:30:50.704748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:44.738 [2024-12-06 11:30:50.768666] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:44.738 [2024-12-06 11:30:50.769241] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:44.738 [2024-12-06 11:30:50.770144] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:44.738 [2024-12-06 11:30:50.770162] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:44.738 [2024-12-06 11:30:50.770347] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.310 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.310 [2024-12-06 11:30:51.453557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 11:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 Malloc0 00:31:45.572 [2024-12-06 11:30:51.549778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3660331 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3660331 /var/tmp/bdevperf.sock 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3660331 ']' 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:45.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:45.572 { 00:31:45.572 "params": { 00:31:45.572 "name": "Nvme$subsystem", 00:31:45.572 "trtype": "$TEST_TRANSPORT", 00:31:45.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.572 "adrfam": "ipv4", 00:31:45.572 "trsvcid": "$NVMF_PORT", 00:31:45.572 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.572 "hdgst": ${hdgst:-false}, 00:31:45.572 "ddgst": ${ddgst:-false} 00:31:45.572 }, 00:31:45.572 "method": "bdev_nvme_attach_controller" 00:31:45.572 } 00:31:45.572 EOF 00:31:45.572 )") 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:45.572 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:45.572 "params": { 00:31:45.572 "name": "Nvme0", 00:31:45.572 "trtype": "tcp", 00:31:45.572 "traddr": "10.0.0.2", 00:31:45.572 "adrfam": "ipv4", 00:31:45.572 "trsvcid": "4420", 00:31:45.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:45.572 "hdgst": false, 00:31:45.572 "ddgst": false 00:31:45.572 }, 00:31:45.572 "method": "bdev_nvme_attach_controller" 00:31:45.572 }' 00:31:45.572 [2024-12-06 11:30:51.656539] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:31:45.572 [2024-12-06 11:30:51.656597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660331 ] 00:31:45.833 [2024-12-06 11:30:51.737938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.833 [2024-12-06 11:30:51.774099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.094 Running I/O for 10 seconds... 
00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:46.361 11:30:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=635 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 635 -ge 100 ']' 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.361 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:46.626 
[2024-12-06 11:30:52.529182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b720 is same with the state(6) to be set 00:31:46.626 [2024-12-06 11:30:52.531187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.626 [2024-12-06 11:30:52.531226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.626 [2024-12-06 11:30:52.531237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.626 [2024-12-06 11:30:52.531246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.626 [2024-12-06 11:30:52.531254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.626 [2024-12-06 11:30:52.531262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.626 [2024-12-06 11:30:52.531270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.626 [2024-12-06 11:30:52.531278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.626 [2024-12-06 11:30:52.531286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9cb10 is same with the state(6) to be set 00:31:46.626 [2024-12-06 11:30:52.531621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.626 [2024-12-06 11:30:52.531646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.626 [2024-12-06 11:30:52.531662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:46.627 [2024-12-06 11:30:52.531932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.531983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.531991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532025] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 
11:30:52.532320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.627 [2024-12-06 11:30:52.532404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.627 [2024-12-06 11:30:52.532414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 
[2024-12-06 11:30:52.532702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.532736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.628 [2024-12-06 11:30:52.532744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.533989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:46.628 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.628 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:46.628 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.628 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:46.628 task offset: 92160 on job bdev=Nvme0n1 fails 00:31:46.628 00:31:46.628 Latency(us) 00:31:46.628 [2024-12-06T10:30:52.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:31:46.628 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:46.628 Job: Nvme0n1 ended in about 0.45 seconds with error 00:31:46.628 Verification LBA range: start 0x0 length 0x400 00:31:46.628 Nvme0n1 : 0.45 1554.89 97.18 141.35 0.00 36640.10 2088.96 34078.72 00:31:46.628 [2024-12-06T10:30:52.795Z] =================================================================================================================== 00:31:46.628 [2024-12-06T10:30:52.795Z] Total : 1554.89 97.18 141.35 0.00 36640.10 2088.96 34078.72 00:31:46.628 [2024-12-06 11:30:52.535982] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:46.628 [2024-12-06 11:30:52.536006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9cb10 (9): Bad file descriptor 00:31:46.628 [2024-12-06 11:30:52.537215] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:46.628 [2024-12-06 11:30:52.537288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:46.628 [2024-12-06 11:30:52.537309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.628 [2024-12-06 11:30:52.537324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:46.628 [2024-12-06 11:30:52.537332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:46.628 [2024-12-06 11:30:52.537342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:46.628 [2024-12-06 11:30:52.537350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c9cb10 
00:31:46.628 [2024-12-06 11:30:52.537370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9cb10 (9): Bad file descriptor 00:31:46.628 [2024-12-06 11:30:52.537382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:46.628 [2024-12-06 11:30:52.537389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:46.628 [2024-12-06 11:30:52.537399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:46.628 [2024-12-06 11:30:52.537408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:46.628 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.628 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3660331 00:31:47.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3660331) - No such process 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:47.571 11:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:47.571 { 00:31:47.571 "params": { 00:31:47.571 "name": "Nvme$subsystem", 00:31:47.571 "trtype": "$TEST_TRANSPORT", 00:31:47.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.571 "adrfam": "ipv4", 00:31:47.571 "trsvcid": "$NVMF_PORT", 00:31:47.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.571 "hdgst": ${hdgst:-false}, 00:31:47.571 "ddgst": ${ddgst:-false} 00:31:47.571 }, 00:31:47.571 "method": "bdev_nvme_attach_controller" 00:31:47.571 } 00:31:47.571 EOF 00:31:47.571 )") 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:47.571 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:47.571 "params": { 00:31:47.571 "name": "Nvme0", 00:31:47.571 "trtype": "tcp", 00:31:47.571 "traddr": "10.0.0.2", 00:31:47.571 "adrfam": "ipv4", 00:31:47.571 "trsvcid": "4420", 00:31:47.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.571 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.571 "hdgst": false, 00:31:47.571 "ddgst": false 00:31:47.571 }, 00:31:47.571 "method": "bdev_nvme_attach_controller" 00:31:47.571 }' 00:31:47.571 [2024-12-06 11:30:53.607573] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:31:47.571 [2024-12-06 11:30:53.607631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660757 ] 00:31:47.571 [2024-12-06 11:30:53.684349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.571 [2024-12-06 11:30:53.719746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.832 Running I/O for 1 seconds... 
00:31:48.777 1866.00 IOPS, 116.62 MiB/s 00:31:48.777 Latency(us) 00:31:48.777 [2024-12-06T10:30:54.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.777 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:48.777 Verification LBA range: start 0x0 length 0x400 00:31:48.777 Nvme0n1 : 1.01 1915.47 119.72 0.00 0.00 32758.25 1713.49 32112.64 00:31:48.777 [2024-12-06T10:30:54.944Z] =================================================================================================================== 00:31:48.777 [2024-12-06T10:30:54.944Z] Total : 1915.47 119.72 0.00 0.00 32758.25 1713.49 32112.64 00:31:49.038 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:49.038 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:49.038 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:49.038 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:49.038 
11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:49.038 rmmod nvme_tcp 00:31:49.038 rmmod nvme_fabrics 00:31:49.038 rmmod nvme_keyring 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3660047 ']' 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3660047 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3660047 ']' 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3660047 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3660047 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:49.038 11:30:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3660047' 00:31:49.038 killing process with pid 3660047 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3660047 00:31:49.038 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3660047 00:31:49.300 [2024-12-06 11:30:55.247167] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.300 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.213 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:51.213 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:51.213 00:31:51.213 real 0m15.322s 00:31:51.213 user 0m19.021s 00:31:51.213 sys 0m7.852s 00:31:51.213 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:51.213 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:51.213 ************************************ 00:31:51.213 END TEST nvmf_host_management 00:31:51.213 ************************************ 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:51.475 ************************************ 00:31:51.475 START TEST nvmf_lvol 00:31:51.475 ************************************ 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:51.475 * Looking for test storage... 
00:31:51.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:51.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.475 --rc genhtml_branch_coverage=1 00:31:51.475 --rc genhtml_function_coverage=1 00:31:51.475 --rc genhtml_legend=1 00:31:51.475 --rc geninfo_all_blocks=1 00:31:51.475 --rc geninfo_unexecuted_blocks=1 00:31:51.475 00:31:51.475 ' 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:51.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.475 --rc genhtml_branch_coverage=1 00:31:51.475 --rc genhtml_function_coverage=1 00:31:51.475 --rc genhtml_legend=1 00:31:51.475 --rc geninfo_all_blocks=1 00:31:51.475 --rc geninfo_unexecuted_blocks=1 00:31:51.475 00:31:51.475 ' 00:31:51.475 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:51.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.475 --rc genhtml_branch_coverage=1 00:31:51.475 --rc genhtml_function_coverage=1 00:31:51.475 --rc genhtml_legend=1 00:31:51.475 --rc geninfo_all_blocks=1 00:31:51.475 --rc geninfo_unexecuted_blocks=1 00:31:51.475 00:31:51.475 ' 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:51.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.476 --rc genhtml_branch_coverage=1 00:31:51.476 --rc genhtml_function_coverage=1 00:31:51.476 --rc genhtml_legend=1 00:31:51.476 --rc geninfo_all_blocks=1 00:31:51.476 --rc geninfo_unexecuted_blocks=1 00:31:51.476 00:31:51.476 ' 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.476 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:51.738 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:51.738 
11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:51.739 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.885 11:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.885 11:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:59.885 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:59.885 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.885 11:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:59.885 Found net devices under 0000:31:00.0: cvl_0_0 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.885 11:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:59.885 Found net devices under 0000:31:00.1: cvl_0_1 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.885 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:31:59.886 00:31:59.886 --- 10.0.0.2 ping statistics --- 00:31:59.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.886 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:31:59.886 00:31:59.886 --- 10.0.0.1 ping statistics --- 00:31:59.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.886 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3665613 
00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3665613 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3665613 ']' 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.886 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:59.886 [2024-12-06 11:31:06.009191] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.886 [2024-12-06 11:31:06.010369] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:31:59.886 [2024-12-06 11:31:06.010425] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.148 [2024-12-06 11:31:06.103507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:00.148 [2024-12-06 11:31:06.144301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.148 [2024-12-06 11:31:06.144337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.148 [2024-12-06 11:31:06.144345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.148 [2024-12-06 11:31:06.144352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.148 [2024-12-06 11:31:06.144358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.148 [2024-12-06 11:31:06.145925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.148 [2024-12-06 11:31:06.145985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.148 [2024-12-06 11:31:06.145987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.148 [2024-12-06 11:31:06.203041] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.148 [2024-12-06 11:31:06.203528] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.148 [2024-12-06 11:31:06.203852] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:00.148 [2024-12-06 11:31:06.204095] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.720 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.720 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:00.720 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.720 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.720 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:00.720 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.720 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.981 [2024-12-06 11:31:06.998702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.981 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.241 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:01.241 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.241 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:01.242 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:01.503 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:01.764 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c39d43bb-73a0-470c-af33-6beda3d72650 00:32:01.764 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c39d43bb-73a0-470c-af33-6beda3d72650 lvol 20 00:32:02.025 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a0a41526-9d41-45b9-8ccb-a29e03cd9dda 00:32:02.025 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:02.025 11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0a41526-9d41-45b9-8ccb-a29e03cd9dda 00:32:02.292 11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.292 [2024-12-06 11:31:08.414843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.292 11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:02.553 
11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3666160 00:32:02.553 11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:02.553 11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:03.496 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a0a41526-9d41-45b9-8ccb-a29e03cd9dda MY_SNAPSHOT 00:32:03.757 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6426af33-9208-4401-b9d2-d3939f0aa295 00:32:03.757 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a0a41526-9d41-45b9-8ccb-a29e03cd9dda 30 00:32:04.018 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6426af33-9208-4401-b9d2-d3939f0aa295 MY_CLONE 00:32:04.279 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=922aa364-bce2-4fa3-a415-dcfefdc43810 00:32:04.279 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 922aa364-bce2-4fa3-a415-dcfefdc43810 00:32:04.850 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3666160 00:32:12.982 Initializing NVMe Controllers 00:32:12.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:12.982 
Controller IO queue size 128, less than required. 00:32:12.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:12.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:12.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:12.982 Initialization complete. Launching workers. 00:32:12.982 ======================================================== 00:32:12.982 Latency(us) 00:32:12.982 Device Information : IOPS MiB/s Average min max 00:32:12.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12427.60 48.55 10301.56 1598.65 59533.42 00:32:12.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15615.30 61.00 8197.83 2462.36 54685.26 00:32:12.982 ======================================================== 00:32:12.982 Total : 28042.90 109.54 9130.13 1598.65 59533.42 00:32:12.982 00:32:12.982 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:12.982 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0a41526-9d41-45b9-8ccb-a29e03cd9dda 00:32:13.243 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c39d43bb-73a0-470c-af33-6beda3d72650 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:13.503 rmmod nvme_tcp 00:32:13.503 rmmod nvme_fabrics 00:32:13.503 rmmod nvme_keyring 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3665613 ']' 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3665613 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3665613 ']' 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3665613 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3665613 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665613' 00:32:13.503 killing process with pid 3665613 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3665613 00:32:13.503 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3665613 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.763 11:31:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.763 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.675 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.675 00:32:15.675 real 0m24.367s 00:32:15.675 user 0m55.714s 00:32:15.675 sys 0m11.103s 00:32:15.675 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.675 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:15.675 ************************************ 00:32:15.675 END TEST nvmf_lvol 00:32:15.675 ************************************ 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:15.937 ************************************ 00:32:15.937 START TEST nvmf_lvs_grow 00:32:15.937 ************************************ 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:15.937 * Looking for test storage... 
00:32:15.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:32:15.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.937 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.937 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:15.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.937 --rc genhtml_branch_coverage=1 00:32:15.937 --rc genhtml_function_coverage=1 00:32:15.937 --rc genhtml_legend=1 00:32:15.937 --rc geninfo_all_blocks=1 00:32:15.937 --rc geninfo_unexecuted_blocks=1 00:32:15.937 00:32:15.937 ' 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:15.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.937 --rc genhtml_branch_coverage=1 00:32:15.937 --rc genhtml_function_coverage=1 00:32:15.937 --rc genhtml_legend=1 00:32:15.937 --rc geninfo_all_blocks=1 00:32:15.937 --rc geninfo_unexecuted_blocks=1 00:32:15.937 00:32:15.937 ' 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:15.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.937 --rc genhtml_branch_coverage=1 00:32:15.937 --rc genhtml_function_coverage=1 00:32:15.937 --rc genhtml_legend=1 00:32:15.937 --rc geninfo_all_blocks=1 00:32:15.937 --rc geninfo_unexecuted_blocks=1 00:32:15.937 00:32:15.937 ' 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:15.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.937 --rc genhtml_branch_coverage=1 00:32:15.937 --rc genhtml_function_coverage=1 00:32:15.937 --rc genhtml_legend=1 00:32:15.937 --rc geninfo_all_blocks=1 00:32:15.937 --rc 
geninfo_unexecuted_blocks=1 00:32:15.937 00:32:15.937 ' 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.937 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.938 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.200 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.200 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.200 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.201 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:16.201 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:24.416 
11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.416 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.416 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:24.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:24.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.416 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:24.417 Found net devices under 0000:31:00.0: cvl_0_0 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.417 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:24.417 Found net devices under 0000:31:00.1: cvl_0_1 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.417 
11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:32:24.417 00:32:24.417 --- 10.0.0.2 ping statistics --- 00:32:24.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.417 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:24.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:32:24.417 00:32:24.417 --- 10.0.0.1 ping statistics --- 00:32:24.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.417 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:24.417 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3672869 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3672869 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3672869 ']' 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.417 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:24.417 [2024-12-06 11:31:30.535651] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:24.417 [2024-12-06 11:31:30.536683] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:32:24.417 [2024-12-06 11:31:30.536721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.680 [2024-12-06 11:31:30.625529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.680 [2024-12-06 11:31:30.662460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.680 [2024-12-06 11:31:30.662496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.680 [2024-12-06 11:31:30.662505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.680 [2024-12-06 11:31:30.662511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.680 [2024-12-06 11:31:30.662517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.680 [2024-12-06 11:31:30.663070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.680 [2024-12-06 11:31:30.718857] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:24.680 [2024-12-06 11:31:30.719111] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:25.254 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.254 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:25.254 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.254 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.254 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:25.254 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.254 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:25.516 [2024-12-06 11:31:31.535838] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:25.516 ************************************ 00:32:25.516 START TEST lvs_grow_clean 00:32:25.516 ************************************ 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:25.516 11:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:25.516 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:25.778 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:25.778 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:26.041 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b0b45304-4675-4210-9210-262a328b73b6 00:32:26.041 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:26.041 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:26.041 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:26.041 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:26.041 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0b45304-4675-4210-9210-262a328b73b6 lvol 150 00:32:26.302 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=da4681e3-4295-4b5b-bad5-40d5d42a6ee6 00:32:26.302 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:26.302 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:26.302 [2024-12-06 11:31:32.463532] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:26.302 [2024-12-06 11:31:32.463680] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:26.563 true 00:32:26.563 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:26.563 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:26.563 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:26.563 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:26.824 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 da4681e3-4295-4b5b-bad5-40d5d42a6ee6 00:32:27.085 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.085 [2024-12-06 11:31:33.215915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.085 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3673571 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3673571 /var/tmp/bdevperf.sock 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3673571 ']' 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.347 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:27.347 [2024-12-06 11:31:33.458989] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:32:27.347 [2024-12-06 11:31:33.459069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673571 ] 00:32:27.608 [2024-12-06 11:31:33.559866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.608 [2024-12-06 11:31:33.610377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.181 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.181 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:28.181 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:28.441 Nvme0n1 00:32:28.703 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:28.703 [ 00:32:28.703 { 00:32:28.703 "name": "Nvme0n1", 00:32:28.703 "aliases": [ 00:32:28.703 "da4681e3-4295-4b5b-bad5-40d5d42a6ee6" 00:32:28.703 ], 00:32:28.703 "product_name": "NVMe disk", 00:32:28.703 
"block_size": 4096, 00:32:28.703 "num_blocks": 38912, 00:32:28.703 "uuid": "da4681e3-4295-4b5b-bad5-40d5d42a6ee6", 00:32:28.703 "numa_id": 0, 00:32:28.703 "assigned_rate_limits": { 00:32:28.703 "rw_ios_per_sec": 0, 00:32:28.703 "rw_mbytes_per_sec": 0, 00:32:28.703 "r_mbytes_per_sec": 0, 00:32:28.703 "w_mbytes_per_sec": 0 00:32:28.703 }, 00:32:28.703 "claimed": false, 00:32:28.703 "zoned": false, 00:32:28.703 "supported_io_types": { 00:32:28.703 "read": true, 00:32:28.703 "write": true, 00:32:28.703 "unmap": true, 00:32:28.703 "flush": true, 00:32:28.703 "reset": true, 00:32:28.703 "nvme_admin": true, 00:32:28.703 "nvme_io": true, 00:32:28.703 "nvme_io_md": false, 00:32:28.703 "write_zeroes": true, 00:32:28.703 "zcopy": false, 00:32:28.703 "get_zone_info": false, 00:32:28.703 "zone_management": false, 00:32:28.703 "zone_append": false, 00:32:28.703 "compare": true, 00:32:28.703 "compare_and_write": true, 00:32:28.703 "abort": true, 00:32:28.703 "seek_hole": false, 00:32:28.703 "seek_data": false, 00:32:28.703 "copy": true, 00:32:28.703 "nvme_iov_md": false 00:32:28.703 }, 00:32:28.703 "memory_domains": [ 00:32:28.703 { 00:32:28.703 "dma_device_id": "system", 00:32:28.703 "dma_device_type": 1 00:32:28.703 } 00:32:28.703 ], 00:32:28.703 "driver_specific": { 00:32:28.703 "nvme": [ 00:32:28.703 { 00:32:28.703 "trid": { 00:32:28.703 "trtype": "TCP", 00:32:28.703 "adrfam": "IPv4", 00:32:28.703 "traddr": "10.0.0.2", 00:32:28.703 "trsvcid": "4420", 00:32:28.703 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:28.703 }, 00:32:28.703 "ctrlr_data": { 00:32:28.703 "cntlid": 1, 00:32:28.703 "vendor_id": "0x8086", 00:32:28.703 "model_number": "SPDK bdev Controller", 00:32:28.703 "serial_number": "SPDK0", 00:32:28.703 "firmware_revision": "25.01", 00:32:28.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.703 "oacs": { 00:32:28.703 "security": 0, 00:32:28.703 "format": 0, 00:32:28.703 "firmware": 0, 00:32:28.703 "ns_manage": 0 00:32:28.703 }, 00:32:28.703 "multi_ctrlr": true, 
00:32:28.703 "ana_reporting": false 00:32:28.703 }, 00:32:28.703 "vs": { 00:32:28.703 "nvme_version": "1.3" 00:32:28.703 }, 00:32:28.703 "ns_data": { 00:32:28.703 "id": 1, 00:32:28.703 "can_share": true 00:32:28.703 } 00:32:28.703 } 00:32:28.703 ], 00:32:28.703 "mp_policy": "active_passive" 00:32:28.703 } 00:32:28.703 } 00:32:28.703 ] 00:32:28.703 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3673740 00:32:28.703 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:28.703 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:28.963 Running I/O for 10 seconds... 00:32:29.902 Latency(us) 00:32:29.902 [2024-12-06T10:31:36.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.902 Nvme0n1 : 1.00 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:32:29.902 [2024-12-06T10:31:36.069Z] =================================================================================================================== 00:32:29.902 [2024-12-06T10:31:36.069Z] Total : 17663.00 69.00 0.00 0.00 0.00 0.00 0.00 00:32:29.902 00:32:30.841 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b0b45304-4675-4210-9210-262a328b73b6 00:32:30.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:30.841 Nvme0n1 : 2.00 17817.00 69.60 0.00 0.00 0.00 0.00 0.00 00:32:30.841 [2024-12-06T10:31:37.008Z] 
=================================================================================================================== 00:32:30.841 [2024-12-06T10:31:37.008Z] Total : 17817.00 69.60 0.00 0.00 0.00 0.00 0.00 00:32:30.841 00:32:30.841 true 00:32:30.841 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:31.101 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:31.101 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:31.101 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:31.101 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3673740 00:32:32.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.040 Nvme0n1 : 3.00 17884.33 69.86 0.00 0.00 0.00 0.00 0.00 00:32:32.040 [2024-12-06T10:31:38.207Z] =================================================================================================================== 00:32:32.040 [2024-12-06T10:31:38.207Z] Total : 17884.33 69.86 0.00 0.00 0.00 0.00 0.00 00:32:32.040 00:32:32.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.981 Nvme0n1 : 4.00 17921.75 70.01 0.00 0.00 0.00 0.00 0.00 00:32:32.981 [2024-12-06T10:31:39.148Z] =================================================================================================================== 00:32:32.981 [2024-12-06T10:31:39.148Z] Total : 17921.75 70.01 0.00 0.00 0.00 0.00 0.00 00:32:32.981 00:32:33.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:32:33.922 Nvme0n1 : 5.00 17944.20 70.09 0.00 0.00 0.00 0.00 0.00 00:32:33.922 [2024-12-06T10:31:40.089Z] =================================================================================================================== 00:32:33.922 [2024-12-06T10:31:40.089Z] Total : 17944.20 70.09 0.00 0.00 0.00 0.00 0.00 00:32:33.922 00:32:34.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:34.863 Nvme0n1 : 6.00 17959.17 70.15 0.00 0.00 0.00 0.00 0.00 00:32:34.863 [2024-12-06T10:31:41.030Z] =================================================================================================================== 00:32:34.863 [2024-12-06T10:31:41.030Z] Total : 17959.17 70.15 0.00 0.00 0.00 0.00 0.00 00:32:34.863 00:32:35.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.805 Nvme0n1 : 7.00 17988.00 70.27 0.00 0.00 0.00 0.00 0.00 00:32:35.805 [2024-12-06T10:31:41.973Z] =================================================================================================================== 00:32:35.806 [2024-12-06T10:31:41.973Z] Total : 17988.00 70.27 0.00 0.00 0.00 0.00 0.00 00:32:35.806 00:32:37.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.190 Nvme0n1 : 8.00 17995.88 70.30 0.00 0.00 0.00 0.00 0.00 00:32:37.190 [2024-12-06T10:31:43.357Z] =================================================================================================================== 00:32:37.190 [2024-12-06T10:31:43.357Z] Total : 17995.88 70.30 0.00 0.00 0.00 0.00 0.00 00:32:37.190 00:32:38.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.129 Nvme0n1 : 9.00 18014.22 70.37 0.00 0.00 0.00 0.00 0.00 00:32:38.129 [2024-12-06T10:31:44.296Z] =================================================================================================================== 00:32:38.129 [2024-12-06T10:31:44.296Z] Total : 18014.22 70.37 0.00 0.00 0.00 0.00 0.00 00:32:38.129 
00:32:39.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.070 Nvme0n1 : 10.00 18028.90 70.43 0.00 0.00 0.00 0.00 0.00 00:32:39.070 [2024-12-06T10:31:45.237Z] =================================================================================================================== 00:32:39.070 [2024-12-06T10:31:45.237Z] Total : 18028.90 70.43 0.00 0.00 0.00 0.00 0.00 00:32:39.070 00:32:39.070 00:32:39.070 Latency(us) 00:32:39.070 [2024-12-06T10:31:45.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.070 Nvme0n1 : 10.00 18027.16 70.42 0.00 0.00 7097.08 2375.68 13926.40 00:32:39.071 [2024-12-06T10:31:45.238Z] =================================================================================================================== 00:32:39.071 [2024-12-06T10:31:45.238Z] Total : 18027.16 70.42 0.00 0.00 7097.08 2375.68 13926.40 00:32:39.071 { 00:32:39.071 "results": [ 00:32:39.071 { 00:32:39.071 "job": "Nvme0n1", 00:32:39.071 "core_mask": "0x2", 00:32:39.071 "workload": "randwrite", 00:32:39.071 "status": "finished", 00:32:39.071 "queue_depth": 128, 00:32:39.071 "io_size": 4096, 00:32:39.071 "runtime": 10.004571, 00:32:39.071 "iops": 18027.159785262156, 00:32:39.071 "mibps": 70.4185929111803, 00:32:39.071 "io_failed": 0, 00:32:39.071 "io_timeout": 0, 00:32:39.071 "avg_latency_us": 7097.084713544844, 00:32:39.071 "min_latency_us": 2375.68, 00:32:39.071 "max_latency_us": 13926.4 00:32:39.071 } 00:32:39.071 ], 00:32:39.071 "core_count": 1 00:32:39.071 } 00:32:39.071 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3673571 00:32:39.071 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3673571 ']' 00:32:39.071 11:31:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3673571 00:32:39.071 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:39.071 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.071 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3673571 00:32:39.071 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:39.071 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:39.071 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3673571' 00:32:39.071 killing process with pid 3673571 00:32:39.071 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3673571 00:32:39.071 Received shutdown signal, test time was about 10.000000 seconds 00:32:39.071 00:32:39.071 Latency(us) 00:32:39.071 [2024-12-06T10:31:45.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.071 [2024-12-06T10:31:45.238Z] =================================================================================================================== 00:32:39.071 [2024-12-06T10:31:45.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:39.071 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3673571 00:32:39.071 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:39.331 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:39.590 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:39.590 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:39.590 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:39.590 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:39.590 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:39.850 [2024-12-06 11:31:45.827448] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:39.850 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:40.110 request: 00:32:40.110 { 00:32:40.110 "uuid": "b0b45304-4675-4210-9210-262a328b73b6", 00:32:40.110 "method": 
"bdev_lvol_get_lvstores", 00:32:40.110 "req_id": 1 00:32:40.110 } 00:32:40.110 Got JSON-RPC error response 00:32:40.110 response: 00:32:40.110 { 00:32:40.110 "code": -19, 00:32:40.110 "message": "No such device" 00:32:40.110 } 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:40.110 aio_bdev 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev da4681e3-4295-4b5b-bad5-40d5d42a6ee6 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=da4681e3-4295-4b5b-bad5-40d5d42a6ee6 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:40.110 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:40.371 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b da4681e3-4295-4b5b-bad5-40d5d42a6ee6 -t 2000 00:32:40.371 [ 00:32:40.371 { 00:32:40.371 "name": "da4681e3-4295-4b5b-bad5-40d5d42a6ee6", 00:32:40.371 "aliases": [ 00:32:40.371 "lvs/lvol" 00:32:40.371 ], 00:32:40.371 "product_name": "Logical Volume", 00:32:40.371 "block_size": 4096, 00:32:40.371 "num_blocks": 38912, 00:32:40.371 "uuid": "da4681e3-4295-4b5b-bad5-40d5d42a6ee6", 00:32:40.371 "assigned_rate_limits": { 00:32:40.371 "rw_ios_per_sec": 0, 00:32:40.371 "rw_mbytes_per_sec": 0, 00:32:40.371 "r_mbytes_per_sec": 0, 00:32:40.371 "w_mbytes_per_sec": 0 00:32:40.371 }, 00:32:40.371 "claimed": false, 00:32:40.371 "zoned": false, 00:32:40.371 "supported_io_types": { 00:32:40.371 "read": true, 00:32:40.371 "write": true, 00:32:40.371 "unmap": true, 00:32:40.371 "flush": false, 00:32:40.371 "reset": true, 00:32:40.371 "nvme_admin": false, 00:32:40.371 "nvme_io": false, 00:32:40.371 "nvme_io_md": false, 00:32:40.371 "write_zeroes": true, 00:32:40.371 "zcopy": false, 00:32:40.371 "get_zone_info": false, 00:32:40.371 "zone_management": false, 00:32:40.371 "zone_append": false, 00:32:40.371 "compare": false, 00:32:40.371 "compare_and_write": false, 00:32:40.371 "abort": false, 00:32:40.371 "seek_hole": true, 00:32:40.371 "seek_data": true, 00:32:40.371 "copy": false, 00:32:40.371 "nvme_iov_md": false 00:32:40.371 }, 00:32:40.371 "driver_specific": { 00:32:40.371 "lvol": { 00:32:40.371 "lvol_store_uuid": "b0b45304-4675-4210-9210-262a328b73b6", 00:32:40.371 "base_bdev": "aio_bdev", 00:32:40.371 
"thin_provision": false, 00:32:40.371 "num_allocated_clusters": 38, 00:32:40.371 "snapshot": false, 00:32:40.371 "clone": false, 00:32:40.371 "esnap_clone": false 00:32:40.371 } 00:32:40.371 } 00:32:40.371 } 00:32:40.371 ] 00:32:40.632 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:40.632 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:40.632 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:40.632 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:40.632 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0b45304-4675-4210-9210-262a328b73b6 00:32:40.632 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:40.891 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:40.891 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete da4681e3-4295-4b5b-bad5-40d5d42a6ee6 00:32:40.891 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0b45304-4675-4210-9210-262a328b73b6 
00:32:41.152 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:41.411 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:41.411 00:32:41.411 real 0m15.842s 00:32:41.411 user 0m15.417s 00:32:41.411 sys 0m1.500s 00:32:41.411 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.411 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.411 ************************************ 00:32:41.411 END TEST lvs_grow_clean 00:32:41.412 ************************************ 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:41.412 ************************************ 00:32:41.412 START TEST lvs_grow_dirty 00:32:41.412 ************************************ 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:41.412 11:31:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:41.412 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:41.671 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:41.671 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:41.931 11:31:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:41.931 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:41.931 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:41.931 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:41.931 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:41.931 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 lvol 150 00:32:42.192 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=44f25cef-7b90-4573-85fc-3e1277ad466d 00:32:42.192 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:42.192 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:42.451 [2024-12-06 11:31:48.383531] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:42.451 [2024-12-06 
11:31:48.383678] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:42.451 true 00:32:42.452 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:42.452 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:42.452 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:42.452 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:42.711 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44f25cef-7b90-4573-85fc-3e1277ad466d 00:32:42.970 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:42.970 [2024-12-06 11:31:49.079680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.970 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3676539 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3676539 /var/tmp/bdevperf.sock 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3676539 ']' 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:43.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.230 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:43.230 [2024-12-06 11:31:49.320217] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:32:43.230 [2024-12-06 11:31:49.320274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676539 ] 00:32:43.490 [2024-12-06 11:31:49.411418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.491 [2024-12-06 11:31:49.441196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.060 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.060 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:44.060 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:44.321 Nvme0n1 00:32:44.321 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:44.581 [ 00:32:44.581 { 00:32:44.581 "name": "Nvme0n1", 00:32:44.581 "aliases": [ 00:32:44.581 "44f25cef-7b90-4573-85fc-3e1277ad466d" 00:32:44.581 ], 00:32:44.581 "product_name": "NVMe disk", 00:32:44.581 "block_size": 4096, 00:32:44.581 "num_blocks": 38912, 00:32:44.581 "uuid": "44f25cef-7b90-4573-85fc-3e1277ad466d", 00:32:44.581 "numa_id": 0, 00:32:44.581 "assigned_rate_limits": { 00:32:44.581 "rw_ios_per_sec": 0, 00:32:44.581 "rw_mbytes_per_sec": 0, 00:32:44.581 "r_mbytes_per_sec": 0, 00:32:44.581 "w_mbytes_per_sec": 0 00:32:44.581 }, 00:32:44.581 "claimed": false, 00:32:44.581 "zoned": false, 
00:32:44.581 "supported_io_types": { 00:32:44.581 "read": true, 00:32:44.581 "write": true, 00:32:44.581 "unmap": true, 00:32:44.581 "flush": true, 00:32:44.581 "reset": true, 00:32:44.581 "nvme_admin": true, 00:32:44.581 "nvme_io": true, 00:32:44.581 "nvme_io_md": false, 00:32:44.581 "write_zeroes": true, 00:32:44.581 "zcopy": false, 00:32:44.581 "get_zone_info": false, 00:32:44.581 "zone_management": false, 00:32:44.581 "zone_append": false, 00:32:44.581 "compare": true, 00:32:44.581 "compare_and_write": true, 00:32:44.581 "abort": true, 00:32:44.581 "seek_hole": false, 00:32:44.581 "seek_data": false, 00:32:44.581 "copy": true, 00:32:44.581 "nvme_iov_md": false 00:32:44.581 }, 00:32:44.581 "memory_domains": [ 00:32:44.581 { 00:32:44.581 "dma_device_id": "system", 00:32:44.581 "dma_device_type": 1 00:32:44.581 } 00:32:44.581 ], 00:32:44.581 "driver_specific": { 00:32:44.581 "nvme": [ 00:32:44.581 { 00:32:44.581 "trid": { 00:32:44.581 "trtype": "TCP", 00:32:44.581 "adrfam": "IPv4", 00:32:44.581 "traddr": "10.0.0.2", 00:32:44.581 "trsvcid": "4420", 00:32:44.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:44.581 }, 00:32:44.581 "ctrlr_data": { 00:32:44.581 "cntlid": 1, 00:32:44.581 "vendor_id": "0x8086", 00:32:44.581 "model_number": "SPDK bdev Controller", 00:32:44.581 "serial_number": "SPDK0", 00:32:44.581 "firmware_revision": "25.01", 00:32:44.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.581 "oacs": { 00:32:44.581 "security": 0, 00:32:44.581 "format": 0, 00:32:44.581 "firmware": 0, 00:32:44.581 "ns_manage": 0 00:32:44.581 }, 00:32:44.581 "multi_ctrlr": true, 00:32:44.581 "ana_reporting": false 00:32:44.581 }, 00:32:44.581 "vs": { 00:32:44.581 "nvme_version": "1.3" 00:32:44.581 }, 00:32:44.581 "ns_data": { 00:32:44.581 "id": 1, 00:32:44.581 "can_share": true 00:32:44.581 } 00:32:44.581 } 00:32:44.581 ], 00:32:44.581 "mp_policy": "active_passive" 00:32:44.581 } 00:32:44.581 } 00:32:44.581 ] 00:32:44.581 11:31:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:44.581 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3676693 00:32:44.581 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:44.581 Running I/O for 10 seconds... 00:32:45.963 Latency(us) 00:32:45.963 [2024-12-06T10:31:52.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.963 Nvme0n1 : 1.00 17429.00 68.08 0.00 0.00 0.00 0.00 0.00 00:32:45.963 [2024-12-06T10:31:52.130Z] =================================================================================================================== 00:32:45.963 [2024-12-06T10:31:52.130Z] Total : 17429.00 68.08 0.00 0.00 0.00 0.00 0.00 00:32:45.963 00:32:46.534 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:46.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.794 Nvme0n1 : 2.00 17498.50 68.35 0.00 0.00 0.00 0.00 0.00 00:32:46.794 [2024-12-06T10:31:52.961Z] =================================================================================================================== 00:32:46.794 [2024-12-06T10:31:52.961Z] Total : 17498.50 68.35 0.00 0.00 0.00 0.00 0.00 00:32:46.794 00:32:46.794 true 00:32:46.794 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:46.794 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:47.055 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:47.055 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:47.055 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3676693 00:32:47.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.626 Nvme0n1 : 3.00 17527.00 68.46 0.00 0.00 0.00 0.00 0.00 00:32:47.626 [2024-12-06T10:31:53.793Z] =================================================================================================================== 00:32:47.626 [2024-12-06T10:31:53.793Z] Total : 17527.00 68.46 0.00 0.00 0.00 0.00 0.00 00:32:47.626 00:32:48.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.566 Nvme0n1 : 4.00 17557.25 68.58 0.00 0.00 0.00 0.00 0.00 00:32:48.566 [2024-12-06T10:31:54.733Z] =================================================================================================================== 00:32:48.566 [2024-12-06T10:31:54.733Z] Total : 17557.25 68.58 0.00 0.00 0.00 0.00 0.00 00:32:48.566 00:32:49.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.959 Nvme0n1 : 5.00 17585.00 68.69 0.00 0.00 0.00 0.00 0.00 00:32:49.959 [2024-12-06T10:31:56.126Z] =================================================================================================================== 00:32:49.959 [2024-12-06T10:31:56.126Z] Total : 17585.00 68.69 0.00 0.00 0.00 0.00 0.00 00:32:49.959 00:32:50.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:50.899 Nvme0n1 : 6.00 17603.50 68.76 0.00 0.00 0.00 0.00 0.00 00:32:50.899 [2024-12-06T10:31:57.066Z] =================================================================================================================== 00:32:50.899 [2024-12-06T10:31:57.066Z] Total : 17603.50 68.76 0.00 0.00 0.00 0.00 0.00 00:32:50.899 00:32:51.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.840 Nvme0n1 : 7.00 17623.57 68.84 0.00 0.00 0.00 0.00 0.00 00:32:51.840 [2024-12-06T10:31:58.007Z] =================================================================================================================== 00:32:51.840 [2024-12-06T10:31:58.007Z] Total : 17623.57 68.84 0.00 0.00 0.00 0.00 0.00 00:32:51.840 00:32:52.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.782 Nvme0n1 : 8.00 17636.62 68.89 0.00 0.00 0.00 0.00 0.00 00:32:52.782 [2024-12-06T10:31:58.949Z] =================================================================================================================== 00:32:52.782 [2024-12-06T10:31:58.949Z] Total : 17636.62 68.89 0.00 0.00 0.00 0.00 0.00 00:32:52.782 00:32:53.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.726 Nvme0n1 : 9.00 17648.56 68.94 0.00 0.00 0.00 0.00 0.00 00:32:53.726 [2024-12-06T10:31:59.893Z] =================================================================================================================== 00:32:53.726 [2024-12-06T10:31:59.893Z] Total : 17648.56 68.94 0.00 0.00 0.00 0.00 0.00 00:32:53.726 00:32:54.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.671 Nvme0n1 : 10.00 17653.30 68.96 0.00 0.00 0.00 0.00 0.00 00:32:54.671 [2024-12-06T10:32:00.838Z] =================================================================================================================== 00:32:54.671 [2024-12-06T10:32:00.838Z] Total : 17653.30 68.96 0.00 0.00 0.00 0.00 0.00 00:32:54.671 00:32:54.671 
00:32:54.671 Latency(us) 00:32:54.671 [2024-12-06T10:32:00.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.671 Nvme0n1 : 10.01 17653.99 68.96 0.00 0.00 7248.33 1706.67 9120.43 00:32:54.671 [2024-12-06T10:32:00.838Z] =================================================================================================================== 00:32:54.671 [2024-12-06T10:32:00.838Z] Total : 17653.99 68.96 0.00 0.00 7248.33 1706.67 9120.43 00:32:54.671 { 00:32:54.671 "results": [ 00:32:54.671 { 00:32:54.671 "job": "Nvme0n1", 00:32:54.671 "core_mask": "0x2", 00:32:54.671 "workload": "randwrite", 00:32:54.671 "status": "finished", 00:32:54.671 "queue_depth": 128, 00:32:54.671 "io_size": 4096, 00:32:54.671 "runtime": 10.006858, 00:32:54.671 "iops": 17653.99289167489, 00:32:54.671 "mibps": 68.96090973310504, 00:32:54.671 "io_failed": 0, 00:32:54.671 "io_timeout": 0, 00:32:54.671 "avg_latency_us": 7248.333425336285, 00:32:54.671 "min_latency_us": 1706.6666666666667, 00:32:54.671 "max_latency_us": 9120.426666666666 00:32:54.671 } 00:32:54.671 ], 00:32:54.671 "core_count": 1 00:32:54.671 } 00:32:54.671 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3676539 00:32:54.671 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3676539 ']' 00:32:54.671 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3676539 00:32:54.671 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:54.671 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.671 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676539 00:32:54.930 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:54.930 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:54.930 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676539' 00:32:54.930 killing process with pid 3676539 00:32:54.930 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3676539 00:32:54.930 Received shutdown signal, test time was about 10.000000 seconds 00:32:54.930 00:32:54.930 Latency(us) 00:32:54.930 [2024-12-06T10:32:01.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.930 [2024-12-06T10:32:01.097Z] =================================================================================================================== 00:32:54.930 [2024-12-06T10:32:01.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.930 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3676539 00:32:54.930 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:55.189 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:55.189 11:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:55.189 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3672869 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3672869 00:32:55.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3672869 Killed "${NVMF_APP[@]}" "$@" 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3678736 00:32:55.450 11:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3678736 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3678736 ']' 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.450 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:55.450 [2024-12-06 11:32:01.588565] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:55.450 [2024-12-06 11:32:01.589806] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:32:55.450 [2024-12-06 11:32:01.589860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.709 [2024-12-06 11:32:01.677805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.709 [2024-12-06 11:32:01.713638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.709 [2024-12-06 11:32:01.713673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.709 [2024-12-06 11:32:01.713681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.709 [2024-12-06 11:32:01.713688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.709 [2024-12-06 11:32:01.713694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.709 [2024-12-06 11:32:01.714281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.709 [2024-12-06 11:32:01.769898] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:55.709 [2024-12-06 11:32:01.770144] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:56.277 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.277 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:56.277 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:56.277 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:56.277 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:56.277 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.277 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:56.536 [2024-12-06 11:32:02.560873] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:56.536 [2024-12-06 11:32:02.560972] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:56.536 [2024-12-06 11:32:02.561004] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 44f25cef-7b90-4573-85fc-3e1277ad466d 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=44f25cef-7b90-4573-85fc-3e1277ad466d 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:56.536 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:56.797 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 44f25cef-7b90-4573-85fc-3e1277ad466d -t 2000 00:32:56.797 [ 00:32:56.797 { 00:32:56.797 "name": "44f25cef-7b90-4573-85fc-3e1277ad466d", 00:32:56.797 "aliases": [ 00:32:56.797 "lvs/lvol" 00:32:56.797 ], 00:32:56.797 "product_name": "Logical Volume", 00:32:56.797 "block_size": 4096, 00:32:56.797 "num_blocks": 38912, 00:32:56.797 "uuid": "44f25cef-7b90-4573-85fc-3e1277ad466d", 00:32:56.797 "assigned_rate_limits": { 00:32:56.797 "rw_ios_per_sec": 0, 00:32:56.797 "rw_mbytes_per_sec": 0, 00:32:56.797 "r_mbytes_per_sec": 0, 00:32:56.797 "w_mbytes_per_sec": 0 00:32:56.797 }, 00:32:56.797 "claimed": false, 00:32:56.797 "zoned": false, 00:32:56.797 "supported_io_types": { 00:32:56.797 "read": true, 00:32:56.797 "write": true, 00:32:56.797 "unmap": true, 00:32:56.797 "flush": false, 00:32:56.797 "reset": true, 00:32:56.797 "nvme_admin": false, 00:32:56.797 "nvme_io": false, 00:32:56.797 "nvme_io_md": false, 00:32:56.797 "write_zeroes": true, 
00:32:56.797 "zcopy": false, 00:32:56.797 "get_zone_info": false, 00:32:56.797 "zone_management": false, 00:32:56.797 "zone_append": false, 00:32:56.797 "compare": false, 00:32:56.797 "compare_and_write": false, 00:32:56.797 "abort": false, 00:32:56.797 "seek_hole": true, 00:32:56.797 "seek_data": true, 00:32:56.797 "copy": false, 00:32:56.797 "nvme_iov_md": false 00:32:56.797 }, 00:32:56.797 "driver_specific": { 00:32:56.797 "lvol": { 00:32:56.797 "lvol_store_uuid": "a506f3bc-79a6-4a3a-8899-8db920bc3b73", 00:32:56.797 "base_bdev": "aio_bdev", 00:32:56.797 "thin_provision": false, 00:32:56.797 "num_allocated_clusters": 38, 00:32:56.797 "snapshot": false, 00:32:56.797 "clone": false, 00:32:56.797 "esnap_clone": false 00:32:56.797 } 00:32:56.797 } 00:32:56.797 } 00:32:56.797 ] 00:32:56.797 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:56.797 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:56.797 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:57.057 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:57.057 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:57.057 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:57.057 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:57.057 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:57.316 [2024-12-06 11:32:03.366659] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:57.316 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:57.575 request: 00:32:57.576 { 00:32:57.576 "uuid": "a506f3bc-79a6-4a3a-8899-8db920bc3b73", 00:32:57.576 "method": "bdev_lvol_get_lvstores", 00:32:57.576 "req_id": 1 00:32:57.576 } 00:32:57.576 Got JSON-RPC error response 00:32:57.576 response: 00:32:57.576 { 00:32:57.576 "code": -19, 00:32:57.576 "message": "No such device" 00:32:57.576 } 00:32:57.576 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:57.576 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:57.576 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:57.576 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:57.576 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:57.835 aio_bdev 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 44f25cef-7b90-4573-85fc-3e1277ad466d 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=44f25cef-7b90-4573-85fc-3e1277ad466d 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:57.835 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 44f25cef-7b90-4573-85fc-3e1277ad466d -t 2000 00:32:58.096 [ 00:32:58.096 { 00:32:58.096 "name": "44f25cef-7b90-4573-85fc-3e1277ad466d", 00:32:58.096 "aliases": [ 00:32:58.096 "lvs/lvol" 00:32:58.096 ], 00:32:58.096 "product_name": "Logical Volume", 00:32:58.096 "block_size": 4096, 00:32:58.096 "num_blocks": 38912, 00:32:58.096 "uuid": "44f25cef-7b90-4573-85fc-3e1277ad466d", 00:32:58.096 "assigned_rate_limits": { 00:32:58.096 "rw_ios_per_sec": 0, 00:32:58.096 "rw_mbytes_per_sec": 0, 00:32:58.096 
"r_mbytes_per_sec": 0, 00:32:58.096 "w_mbytes_per_sec": 0 00:32:58.096 }, 00:32:58.096 "claimed": false, 00:32:58.096 "zoned": false, 00:32:58.096 "supported_io_types": { 00:32:58.096 "read": true, 00:32:58.096 "write": true, 00:32:58.096 "unmap": true, 00:32:58.096 "flush": false, 00:32:58.096 "reset": true, 00:32:58.096 "nvme_admin": false, 00:32:58.096 "nvme_io": false, 00:32:58.096 "nvme_io_md": false, 00:32:58.096 "write_zeroes": true, 00:32:58.096 "zcopy": false, 00:32:58.096 "get_zone_info": false, 00:32:58.096 "zone_management": false, 00:32:58.096 "zone_append": false, 00:32:58.096 "compare": false, 00:32:58.096 "compare_and_write": false, 00:32:58.096 "abort": false, 00:32:58.096 "seek_hole": true, 00:32:58.096 "seek_data": true, 00:32:58.096 "copy": false, 00:32:58.096 "nvme_iov_md": false 00:32:58.096 }, 00:32:58.096 "driver_specific": { 00:32:58.096 "lvol": { 00:32:58.096 "lvol_store_uuid": "a506f3bc-79a6-4a3a-8899-8db920bc3b73", 00:32:58.096 "base_bdev": "aio_bdev", 00:32:58.096 "thin_provision": false, 00:32:58.096 "num_allocated_clusters": 38, 00:32:58.096 "snapshot": false, 00:32:58.096 "clone": false, 00:32:58.096 "esnap_clone": false 00:32:58.096 } 00:32:58.096 } 00:32:58.096 } 00:32:58.096 ] 00:32:58.096 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:58.096 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:58.096 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:58.356 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:58.356 11:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:58.356 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:58.356 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:58.356 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44f25cef-7b90-4573-85fc-3e1277ad466d 00:32:58.616 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a506f3bc-79a6-4a3a-8899-8db920bc3b73 00:32:58.616 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:58.876 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:58.876 00:32:58.876 real 0m17.470s 00:32:58.876 user 0m35.033s 00:32:58.876 sys 0m3.097s 00:32:58.876 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:58.876 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:58.877 ************************************ 00:32:58.877 END TEST lvs_grow_dirty 00:32:58.877 ************************************ 
00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:58.877 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:58.877 nvmf_trace.0 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.141 11:32:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.141 rmmod nvme_tcp 00:32:59.141 rmmod nvme_fabrics 00:32:59.141 rmmod nvme_keyring 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3678736 ']' 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3678736 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3678736 ']' 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3678736 00:32:59.141 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:59.142 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.142 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678736 00:32:59.142 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.142 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.142 
11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678736' 00:32:59.142 killing process with pid 3678736 00:32:59.142 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3678736 00:32:59.142 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3678736 00:32:59.401 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.401 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.401 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.402 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.309 
11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.309 00:33:01.309 real 0m45.566s 00:33:01.309 user 0m53.625s 00:33:01.309 sys 0m11.357s 00:33:01.309 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.309 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:01.309 ************************************ 00:33:01.309 END TEST nvmf_lvs_grow 00:33:01.309 ************************************ 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:01.569 ************************************ 00:33:01.569 START TEST nvmf_bdev_io_wait 00:33:01.569 ************************************ 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:01.569 * Looking for test storage... 
00:33:01.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.569 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:01.570 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:01.830 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.830 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:01.830 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.830 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.830 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:01.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.831 --rc genhtml_branch_coverage=1 00:33:01.831 --rc genhtml_function_coverage=1 00:33:01.831 --rc genhtml_legend=1 00:33:01.831 --rc geninfo_all_blocks=1 00:33:01.831 --rc geninfo_unexecuted_blocks=1 00:33:01.831 00:33:01.831 ' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:01.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.831 --rc genhtml_branch_coverage=1 00:33:01.831 --rc genhtml_function_coverage=1 00:33:01.831 --rc genhtml_legend=1 00:33:01.831 --rc geninfo_all_blocks=1 00:33:01.831 --rc geninfo_unexecuted_blocks=1 00:33:01.831 00:33:01.831 ' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:01.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.831 --rc genhtml_branch_coverage=1 00:33:01.831 --rc genhtml_function_coverage=1 00:33:01.831 --rc genhtml_legend=1 00:33:01.831 --rc geninfo_all_blocks=1 00:33:01.831 --rc geninfo_unexecuted_blocks=1 00:33:01.831 00:33:01.831 ' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:01.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.831 --rc genhtml_branch_coverage=1 00:33:01.831 --rc genhtml_function_coverage=1 
00:33:01.831 --rc genhtml_legend=1 00:33:01.831 --rc geninfo_all_blocks=1 00:33:01.831 --rc geninfo_unexecuted_blocks=1 00:33:01.831 00:33:01.831 ' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:01.831 11:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.831 11:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.831 11:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.831 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.831 11:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.832 11:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:09.965 11:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:09.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:09.965 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:09.965 Found net devices under 0000:31:00.0: cvl_0_0 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:09.965 Found net devices under 0000:31:00.1: cvl_0_1 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.965 11:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.965 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.966 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.966 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.966 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:09.966 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:10.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:33:10.226 00:33:10.226 --- 10.0.0.2 ping statistics --- 00:33:10.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.226 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:10.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:33:10.226 00:33:10.226 --- 10.0.0.1 ping statistics --- 00:33:10.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.226 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.226 11:32:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3684299 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3684299 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3684299 ']' 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.226 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:10.226 [2024-12-06 11:32:16.380596] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.226 [2024-12-06 11:32:16.381773] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:33:10.226 [2024-12-06 11:32:16.381829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.486 [2024-12-06 11:32:16.474923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:10.486 [2024-12-06 11:32:16.517805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.486 [2024-12-06 11:32:16.517845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.486 [2024-12-06 11:32:16.517853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.486 [2024-12-06 11:32:16.517860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.486 [2024-12-06 11:32:16.517872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:10.486 [2024-12-06 11:32:16.519486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.486 [2024-12-06 11:32:16.519604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:10.486 [2024-12-06 11:32:16.519761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.486 [2024-12-06 11:32:16.519762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:10.486 [2024-12-06 11:32:16.520033] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:11.057 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.057 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:11.057 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:11.057 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:11.057 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.057 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.057 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.318 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 [2024-12-06 11:32:17.273600] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:11.318 [2024-12-06 11:32:17.273830] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:11.318 [2024-12-06 11:32:17.274636] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:11.318 [2024-12-06 11:32:17.274679] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 [2024-12-06 11:32:17.284505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 Malloc0 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.318 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.318 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.319 [2024-12-06 11:32:17.348384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3684450 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3684452 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:11.319 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.319 { 00:33:11.319 "params": { 00:33:11.319 "name": "Nvme$subsystem", 00:33:11.319 "trtype": "$TEST_TRANSPORT", 00:33:11.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "$NVMF_PORT", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.319 "hdgst": ${hdgst:-false}, 00:33:11.319 "ddgst": ${ddgst:-false} 00:33:11.319 }, 00:33:11.319 "method": "bdev_nvme_attach_controller" 00:33:11.319 } 00:33:11.319 EOF 00:33:11.319 )") 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3684454 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.319 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.319 { 00:33:11.319 "params": { 00:33:11.319 "name": "Nvme$subsystem", 00:33:11.319 "trtype": "$TEST_TRANSPORT", 00:33:11.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "$NVMF_PORT", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.319 "hdgst": ${hdgst:-false}, 00:33:11.319 "ddgst": ${ddgst:-false} 00:33:11.319 }, 00:33:11.319 "method": "bdev_nvme_attach_controller" 00:33:11.319 } 00:33:11.319 EOF 00:33:11.319 )") 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3684457 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.319 { 00:33:11.319 "params": { 00:33:11.319 "name": 
"Nvme$subsystem", 00:33:11.319 "trtype": "$TEST_TRANSPORT", 00:33:11.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "$NVMF_PORT", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.319 "hdgst": ${hdgst:-false}, 00:33:11.319 "ddgst": ${ddgst:-false} 00:33:11.319 }, 00:33:11.319 "method": "bdev_nvme_attach_controller" 00:33:11.319 } 00:33:11.319 EOF 00:33:11.319 )") 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.319 { 00:33:11.319 "params": { 00:33:11.319 "name": "Nvme$subsystem", 00:33:11.319 "trtype": "$TEST_TRANSPORT", 00:33:11.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "$NVMF_PORT", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.319 "hdgst": ${hdgst:-false}, 00:33:11.319 "ddgst": ${ddgst:-false} 00:33:11.319 }, 00:33:11.319 "method": 
"bdev_nvme_attach_controller" 00:33:11.319 } 00:33:11.319 EOF 00:33:11.319 )") 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3684450 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.319 "params": { 00:33:11.319 "name": "Nvme1", 00:33:11.319 "trtype": "tcp", 00:33:11.319 "traddr": "10.0.0.2", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "4420", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:11.319 "hdgst": false, 00:33:11.319 "ddgst": false 00:33:11.319 }, 00:33:11.319 "method": "bdev_nvme_attach_controller" 00:33:11.319 }' 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.319 "params": { 00:33:11.319 "name": "Nvme1", 00:33:11.319 "trtype": "tcp", 00:33:11.319 "traddr": "10.0.0.2", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "4420", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:11.319 "hdgst": false, 00:33:11.319 "ddgst": false 00:33:11.319 }, 00:33:11.319 "method": "bdev_nvme_attach_controller" 00:33:11.319 }' 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.319 "params": { 00:33:11.319 "name": "Nvme1", 00:33:11.319 "trtype": "tcp", 00:33:11.319 "traddr": "10.0.0.2", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "4420", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:11.319 "hdgst": false, 00:33:11.319 "ddgst": false 00:33:11.319 }, 00:33:11.319 "method": "bdev_nvme_attach_controller" 00:33:11.319 }' 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:11.319 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.319 "params": { 00:33:11.319 "name": "Nvme1", 00:33:11.319 "trtype": "tcp", 00:33:11.319 "traddr": "10.0.0.2", 00:33:11.319 "adrfam": "ipv4", 00:33:11.319 "trsvcid": "4420", 00:33:11.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:11.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:11.319 "hdgst": false, 00:33:11.319 "ddgst": false 00:33:11.319 }, 00:33:11.319 "method": "bdev_nvme_attach_controller" 
00:33:11.319 }' 00:33:11.319 [2024-12-06 11:32:17.404625] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:33:11.319 [2024-12-06 11:32:17.404667] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:33:11.319 [2024-12-06 11:32:17.404681] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:11.320 [2024-12-06 11:32:17.404714] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:11.320 [2024-12-06 11:32:17.405141] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:33:11.320 [2024-12-06 11:32:17.405185] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:11.320 [2024-12-06 11:32:17.407032] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:33:11.320 [2024-12-06 11:32:17.407081] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:11.580 [2024-12-06 11:32:17.576956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.580 [2024-12-06 11:32:17.605614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:11.580 [2024-12-06 11:32:17.634436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.580 [2024-12-06 11:32:17.664080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:11.580 [2024-12-06 11:32:17.693803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.580 [2024-12-06 11:32:17.723413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:11.580 [2024-12-06 11:32:17.743035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.841 [2024-12-06 11:32:17.771402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:11.841 Running I/O for 1 seconds... 00:33:11.841 Running I/O for 1 seconds... 00:33:11.841 Running I/O for 1 seconds... 00:33:11.841 Running I/O for 1 seconds... 
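The trace above repeatedly expands `gen_nvmf_target_json` from `nvmf/common.sh`: each subsystem contributes one JSON fragment via a `cat <<-EOF` heredoc appended to a `config` array, and the fragments are comma-joined (`IFS=,`) into the attach-controller config that each `bdevperf` instance reads from `/dev/fd/63`. A simplified, self-contained sketch of that pattern follows; it is an illustration only, not the real helper (the real one also pipes through `jq .` and honors the full set of `NVMF_*` environment variables), and the default values here are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern visible in the trace:
# one heredoc-built JSON fragment per subsystem, comma-joined into
# the config bdevperf consumes. Defaults are illustrative, not SPDK's.
gen_nvmf_target_json() {
	local subsystem
	local config=()

	# "${@:-1}" mirrors the trace: default to subsystem 1 if no args.
	for subsystem in "${@:-1}"; do
		config+=("$(cat <<-EOF
		{
		  "params": {
		    "name": "Nvme$subsystem",
		    "trtype": "${TEST_TRANSPORT:-tcp}",
		    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
		    "adrfam": "ipv4",
		    "trsvcid": "${NVMF_PORT:-4420}",
		    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
		    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
		    "hdgst": ${hdgst:-false},
		    "ddgst": ${ddgst:-false}
		  },
		  "method": "bdev_nvme_attach_controller"
		}
		EOF
		)")
	done

	# Comma-join the fragments, as the IFS=, step in the trace does.
	local IFS=,
	printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

Each of the four `bdevperf` invocations in the trace (`-w write`, `-w read`, `-w flush`, `-w unmap`, core masks 0x10/0x20/0x40/0x80) receives one such config via process substitution, which is why the same `Nvme1` attach block is printed four times before the "Running I/O" lines.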
00:33:12.780 14924.00 IOPS, 58.30 MiB/s 00:33:12.780 Latency(us) 00:33:12.780 [2024-12-06T10:32:18.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.780 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:12.780 Nvme1n1 : 1.01 14979.14 58.51 0.00 0.00 8517.64 2116.27 12124.16 00:33:12.780 [2024-12-06T10:32:18.947Z] =================================================================================================================== 00:33:12.780 [2024-12-06T10:32:18.947Z] Total : 14979.14 58.51 0.00 0.00 8517.64 2116.27 12124.16 00:33:12.780 7440.00 IOPS, 29.06 MiB/s 00:33:12.780 Latency(us) 00:33:12.780 [2024-12-06T10:32:18.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.780 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:12.780 Nvme1n1 : 1.02 7471.62 29.19 0.00 0.00 16997.03 2430.29 23046.83 00:33:12.780 [2024-12-06T10:32:18.947Z] =================================================================================================================== 00:33:12.780 [2024-12-06T10:32:18.947Z] Total : 7471.62 29.19 0.00 0.00 16997.03 2430.29 23046.83 00:33:12.780 176480.00 IOPS, 689.38 MiB/s 00:33:12.780 Latency(us) 00:33:12.780 [2024-12-06T10:32:18.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.780 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:12.780 Nvme1n1 : 1.00 176109.36 687.93 0.00 0.00 722.63 308.91 2116.27 00:33:12.780 [2024-12-06T10:32:18.947Z] =================================================================================================================== 00:33:12.780 [2024-12-06T10:32:18.947Z] Total : 176109.36 687.93 0.00 0.00 722.63 308.91 2116.27 00:33:12.780 7321.00 IOPS, 28.60 MiB/s [2024-12-06T10:32:18.947Z] 11:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3684452 00:33:12.780 00:33:12.780 
Latency(us) 00:33:12.780 [2024-12-06T10:32:18.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.780 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:12.780 Nvme1n1 : 1.01 7409.93 28.95 0.00 0.00 17223.06 4123.31 31238.83 00:33:12.780 [2024-12-06T10:32:18.947Z] =================================================================================================================== 00:33:12.780 [2024-12-06T10:32:18.947Z] Total : 7409.93 28.95 0.00 0.00 17223.06 4123.31 31238.83 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3684454 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3684457 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.041 rmmod nvme_tcp 00:33:13.041 rmmod nvme_fabrics 00:33:13.041 rmmod nvme_keyring 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3684299 ']' 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3684299 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3684299 ']' 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3684299 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3684299 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:13.041 11:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3684299' 00:33:13.041 killing process with pid 3684299 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3684299 00:33:13.041 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3684299 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.302 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.362 00:33:15.362 real 0m13.844s 00:33:15.362 user 0m14.779s 00:33:15.362 sys 0m8.202s 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:15.362 ************************************ 00:33:15.362 END TEST nvmf_bdev_io_wait 00:33:15.362 ************************************ 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.362 ************************************ 00:33:15.362 START TEST nvmf_queue_depth 00:33:15.362 ************************************ 00:33:15.362 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:15.623 * Looking for test storage... 
00:33:15.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:15.623 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.624 --rc genhtml_branch_coverage=1 00:33:15.624 --rc genhtml_function_coverage=1 00:33:15.624 --rc genhtml_legend=1 00:33:15.624 --rc geninfo_all_blocks=1 00:33:15.624 --rc geninfo_unexecuted_blocks=1 00:33:15.624 00:33:15.624 ' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.624 --rc genhtml_branch_coverage=1 00:33:15.624 --rc genhtml_function_coverage=1 00:33:15.624 --rc genhtml_legend=1 00:33:15.624 --rc geninfo_all_blocks=1 00:33:15.624 --rc geninfo_unexecuted_blocks=1 00:33:15.624 00:33:15.624 ' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.624 --rc genhtml_branch_coverage=1 00:33:15.624 --rc genhtml_function_coverage=1 00:33:15.624 --rc genhtml_legend=1 00:33:15.624 --rc geninfo_all_blocks=1 00:33:15.624 --rc geninfo_unexecuted_blocks=1 00:33:15.624 00:33:15.624 ' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.624 --rc genhtml_branch_coverage=1 00:33:15.624 --rc genhtml_function_coverage=1 00:33:15.624 --rc genhtml_legend=1 00:33:15.624 --rc 
geninfo_all_blocks=1 00:33:15.624 --rc geninfo_unexecuted_blocks=1 00:33:15.624 00:33:15.624 ' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.624 11:32:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.624 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.625 11:32:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:15.625 11:32:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.625 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.765 
11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:23.765 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.765 11:32:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:23.765 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:23.765 Found net devices under 0000:31:00.0: cvl_0_0 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:23.765 Found net devices under 0000:31:00.1: cvl_0_1 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.765 11:32:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.765 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.026 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.026 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:24.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:24.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:33:24.027 00:33:24.027 --- 10.0.0.2 ping statistics --- 00:33:24.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.027 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:24.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:33:24.027 00:33:24.027 --- 10.0.0.1 ping statistics --- 00:33:24.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.027 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:24.027 11:32:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3689495 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3689495 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3689495 ']' 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.027 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:24.027 [2024-12-06 11:32:30.173744] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:24.027 [2024-12-06 11:32:30.175172] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:33:24.027 [2024-12-06 11:32:30.175231] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.288 [2024-12-06 11:32:30.282025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.288 [2024-12-06 11:32:30.323686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.288 [2024-12-06 11:32:30.323734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.288 [2024-12-06 11:32:30.323742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.288 [2024-12-06 11:32:30.323749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.288 [2024-12-06 11:32:30.323756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.288 [2024-12-06 11:32:30.324470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.288 [2024-12-06 11:32:30.393847] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:24.288 [2024-12-06 11:32:30.394114] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:24.860 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.860 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:24.860 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.860 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.860 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:24.860 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.860 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:24.860 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.860 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:24.860 [2024-12-06 11:32:31.018514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.860 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.860 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:25.120 Malloc0 00:33:25.120 11:32:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:25.120 [2024-12-06 11:32:31.101498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.120 
11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3689799 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3689799 /var/tmp/bdevperf.sock 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3689799 ']' 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:25.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.120 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:25.120 [2024-12-06 11:32:31.160399] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:33:25.120 [2024-12-06 11:32:31.160473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689799 ] 00:33:25.120 [2024-12-06 11:32:31.246524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.381 [2024-12-06 11:32:31.288271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.951 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.951 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:25.951 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:25.951 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.951 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:25.951 NVMe0n1 00:33:25.951 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.951 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:25.951 Running I/O for 10 seconds... 
00:33:28.273 8197.00 IOPS, 32.02 MiB/s [2024-12-06T10:32:35.379Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-06T10:32:36.318Z] 8840.00 IOPS, 34.53 MiB/s [2024-12-06T10:32:37.260Z] 9444.00 IOPS, 36.89 MiB/s [2024-12-06T10:32:38.201Z] 9892.20 IOPS, 38.64 MiB/s [2024-12-06T10:32:39.144Z] 10242.33 IOPS, 40.01 MiB/s [2024-12-06T10:32:40.528Z] 10514.00 IOPS, 41.07 MiB/s [2024-12-06T10:32:41.470Z] 10663.75 IOPS, 41.66 MiB/s [2024-12-06T10:32:42.413Z] 10816.00 IOPS, 42.25 MiB/s [2024-12-06T10:32:42.413Z] 10960.00 IOPS, 42.81 MiB/s 00:33:36.246 Latency(us) 00:33:36.246 [2024-12-06T10:32:42.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.246 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:36.246 Verification LBA range: start 0x0 length 0x4000 00:33:36.246 NVMe0n1 : 10.06 10991.05 42.93 0.00 0.00 92834.72 24466.77 79080.11 00:33:36.246 [2024-12-06T10:32:42.413Z] =================================================================================================================== 00:33:36.246 [2024-12-06T10:32:42.414Z] Total : 10991.05 42.93 0.00 0.00 92834.72 24466.77 79080.11 00:33:36.247 { 00:33:36.247 "results": [ 00:33:36.247 { 00:33:36.247 "job": "NVMe0n1", 00:33:36.247 "core_mask": "0x1", 00:33:36.247 "workload": "verify", 00:33:36.247 "status": "finished", 00:33:36.247 "verify_range": { 00:33:36.247 "start": 0, 00:33:36.247 "length": 16384 00:33:36.247 }, 00:33:36.247 "queue_depth": 1024, 00:33:36.247 "io_size": 4096, 00:33:36.247 "runtime": 10.062735, 00:33:36.247 "iops": 10991.047662489373, 00:33:36.247 "mibps": 42.93377993159911, 00:33:36.247 "io_failed": 0, 00:33:36.247 "io_timeout": 0, 00:33:36.247 "avg_latency_us": 92834.71711826402, 00:33:36.247 "min_latency_us": 24466.773333333334, 00:33:36.247 "max_latency_us": 79080.10666666667 00:33:36.247 } 00:33:36.247 ], 00:33:36.247 "core_count": 1 00:33:36.247 } 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3689799 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3689799 ']' 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3689799 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3689799 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3689799' 00:33:36.247 killing process with pid 3689799 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3689799 00:33:36.247 Received shutdown signal, test time was about 10.000000 seconds 00:33:36.247 00:33:36.247 Latency(us) 00:33:36.247 [2024-12-06T10:32:42.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.247 [2024-12-06T10:32:42.414Z] =================================================================================================================== 00:33:36.247 [2024-12-06T10:32:42.414Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3689799 00:33:36.247 11:32:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:36.247 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.508 rmmod nvme_tcp 00:33:36.508 rmmod nvme_fabrics 00:33:36.508 rmmod nvme_keyring 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3689495 ']' 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3689495 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3689495 ']' 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3689495 00:33:36.508 11:32:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3689495 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3689495' 00:33:36.508 killing process with pid 3689495 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3689495 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3689495 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.508 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:33:36.769 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.769 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.769 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.769 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.769 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.683 00:33:38.683 real 0m23.287s 00:33:38.683 user 0m24.740s 00:33:38.683 sys 0m8.075s 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:38.683 ************************************ 00:33:38.683 END TEST nvmf_queue_depth 00:33:38.683 ************************************ 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:38.683 ************************************ 00:33:38.683 START 
TEST nvmf_target_multipath 00:33:38.683 ************************************ 00:33:38.683 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:38.946 * Looking for test storage... 00:33:38.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:38.946 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:38.946 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:38.946 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.946 11:32:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:38.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.946 --rc genhtml_branch_coverage=1 00:33:38.946 --rc genhtml_function_coverage=1 00:33:38.946 --rc genhtml_legend=1 00:33:38.946 --rc geninfo_all_blocks=1 00:33:38.946 --rc geninfo_unexecuted_blocks=1 00:33:38.946 00:33:38.946 ' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:38.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.946 --rc genhtml_branch_coverage=1 00:33:38.946 --rc genhtml_function_coverage=1 00:33:38.946 --rc genhtml_legend=1 00:33:38.946 --rc geninfo_all_blocks=1 00:33:38.946 --rc geninfo_unexecuted_blocks=1 00:33:38.946 00:33:38.946 ' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:38.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.946 --rc genhtml_branch_coverage=1 00:33:38.946 --rc genhtml_function_coverage=1 00:33:38.946 --rc genhtml_legend=1 00:33:38.946 --rc geninfo_all_blocks=1 00:33:38.946 --rc geninfo_unexecuted_blocks=1 00:33:38.946 00:33:38.946 ' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:38.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.946 --rc genhtml_branch_coverage=1 00:33:38.946 --rc genhtml_function_coverage=1 00:33:38.946 --rc genhtml_legend=1 00:33:38.946 --rc geninfo_all_blocks=1 00:33:38.946 --rc geninfo_unexecuted_blocks=1 00:33:38.946 00:33:38.946 ' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.946 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.947 11:32:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.947 11:32:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.947 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.084 11:32:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.084 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:47.085 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:47.085 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:47.085 Found net devices under 0000:31:00.0: cvl_0_0 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.085 11:32:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:47.085 Found net devices under 0000:31:00.1: cvl_0_1 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.085 11:32:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.085 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.347 11:32:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:33:47.347 00:33:47.347 --- 10.0.0.2 ping statistics --- 00:33:47.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.347 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:47.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:33:47.347 00:33:47.347 --- 10.0.0.1 ping statistics --- 00:33:47.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.347 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.347 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:47.348 only one NIC for nvmf test 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:47.348 11:32:53 
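The `nvmf_tcp_init` phase traced above boils down to: flush both ports, move the target-side port into a network namespace, address both ends on 10.0.0.0/24, open TCP/4420 in iptables with an `SPDK_NVMF`-tagged comment, and ping in both directions. A minimal sketch of that sequence (interface names, addresses, and the rule comment are taken from the log; the `run` dry-run wrapper is an illustrative assumption so the sketch can be read without root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence traced from nvmf/common.sh.
# Set DRY_RUN=0 to execute the commands for real (requires root).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

TGT_IF=cvl_0_0          # target-side port (moved into the namespace)
INI_IF=cvl_0_1          # initiator-side port (stays in the root namespace)
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port; the comment embeds the rule text so
# teardown can strip exactly the rules this test added.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

With both pings answering, the helper returns 0 and the test proceeds to `modprobe nvme-tcp`, exactly as the trace shows.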
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.348 rmmod nvme_tcp 00:33:47.348 rmmod nvme_fabrics 00:33:47.348 rmmod nvme_keyring 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:47.348 11:32:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.348 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.892 
11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.892 00:33:49.892 real 0m10.780s 00:33:49.892 user 0m2.393s 00:33:49.892 sys 0m6.346s 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:49.892 ************************************ 00:33:49.892 END TEST nvmf_target_multipath 00:33:49.892 ************************************ 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:49.892 ************************************ 00:33:49.892 START TEST nvmf_zcopy 00:33:49.892 ************************************ 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:49.892 * Looking for test storage... 
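The `nvmftestfini` path traced above reverses the setup: unload the NVMe/TCP modules (retrying inside a `set +e`/`set -e` bracket, since unload can fail while connections drain), strip every iptables rule tagged `SPDK_NVMF`, remove the namespace, and flush the initiator address. A dry-run sketch under the same naming assumptions as the log (the `run` wrapper that only echoes is an assumption, not part of the real script):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmftestfini / nvmf_tcp_fini teardown traced from
# nvmf/common.sh. Set DRY_RUN=0 to execute for real (requires root).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
INI_IF=cvl_0_1

run sync
# Module removal may transiently fail, hence the retry loop in the log.
set +e
for _ in {1..20}; do
    run modprobe -v -r nvme-tcp && break
done
set -e
run modprobe -v -r nvme-fabrics
# iptr: restore iptables minus the rules this test tagged SPDK_NVMF.
if [ "$DRY_RUN" = 1 ]; then
    echo "+ iptables-save | grep -v SPDK_NVMF | iptables-restore"
else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
fi
run ip netns delete "$NS"
run ip -4 addr flush "$INI_IF"
```

Note the trace runs this teardown twice (once from `multipath.sh@47`, once from the EXIT trap at `multipath.sh@1`); the cleanup is idempotent enough that the second pass is harmless.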
00:33:49.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:49.892 11:32:55 
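The lcov version gate traced above (`lt 1.15 2` via `cmp_versions` in `scripts/common.sh`) is a per-component numeric compare: split both versions on `.`, `-`, or `:`, walk the longer length treating missing components as 0, and decide at the first differing component. A self-contained sketch of that logic (function names mirror the trace; the zero-padding detail is an assumption consistent with the `ver1_l`/`ver2_l` handling shown):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions / lt helpers traced from scripts/common.sh.
cmp_versions() {
    local IFS=.-: v=0
    local -a ver1 ver2
    read -ra ver1 <<< "$1"   # e.g. "1.15" -> (1 15)
    local op=$2              # one of '<' '>' '=='
    read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short version with 0
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]        # every component matched
}
lt() { cmp_versions "$1" '<' "$2"; }
```

So `lt 1.15 2` succeeds (1 < 2 at the first component), which is why the trace takes the `ver1[v]=1` / `ver2[v]=2` branch and returns 0, selecting the branch-coverage `LCOV_OPTS` that follow.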
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.892 --rc genhtml_branch_coverage=1 00:33:49.892 --rc genhtml_function_coverage=1 00:33:49.892 --rc genhtml_legend=1 00:33:49.892 --rc geninfo_all_blocks=1 00:33:49.892 --rc geninfo_unexecuted_blocks=1 00:33:49.892 00:33:49.892 ' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.892 --rc genhtml_branch_coverage=1 00:33:49.892 --rc genhtml_function_coverage=1 00:33:49.892 --rc genhtml_legend=1 00:33:49.892 --rc geninfo_all_blocks=1 00:33:49.892 --rc geninfo_unexecuted_blocks=1 00:33:49.892 00:33:49.892 ' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.892 --rc genhtml_branch_coverage=1 00:33:49.892 --rc genhtml_function_coverage=1 00:33:49.892 --rc genhtml_legend=1 00:33:49.892 --rc geninfo_all_blocks=1 00:33:49.892 --rc geninfo_unexecuted_blocks=1 00:33:49.892 00:33:49.892 ' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.892 --rc genhtml_branch_coverage=1 00:33:49.892 --rc genhtml_function_coverage=1 00:33:49.892 --rc genhtml_legend=1 00:33:49.892 --rc geninfo_all_blocks=1 00:33:49.892 --rc geninfo_unexecuted_blocks=1 00:33:49.892 00:33:49.892 ' 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.892 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.893 11:32:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.893 11:32:55 
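`build_nvmf_app_args`, traced just above, simply accumulates the target app's command line into the `NVMF_APP` array: shared-memory id, a verbose log mask, optional no-huge-pages flags, and `--interrupt-mode` because this run passes `--interrupt-mode` to every test. A sketch of that accumulation (array and variable names follow the trace; the base command and the gating values are assumptions standing in for state the real script sets elsewhere):

```shell
#!/usr/bin/env bash
# Sketch of build_nvmf_app_args from nvmf/common.sh as traced in this run.
NVMF_APP=(nvmf_tgt)       # assumed base command; the real script sets this
NVMF_APP_SHM_ID=0         # assumed default shared-memory id
NO_HUGE=()                # empty unless huge pages are disabled
TEST_INTERRUPT_MODE=1     # this run was started with --interrupt-mode

build_nvmf_app_args() {
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full log mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # no-op when array is empty
    if [ "$TEST_INTERRUPT_MODE" -eq 1 ]; then     # the '[' 1 -eq 1 ']' branch
        NVMF_APP+=(--interrupt-mode)
    fi
}

build_nvmf_app_args
printf '%s\n' "${NVMF_APP[@]}"
```

Later, once the namespace exists, the trace prepends `ip netns exec cvl_0_0_ns_spdk` to this same array (`NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`), so the target runs inside the test namespace.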
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:49.893 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.029 
11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.029 11:33:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:58.029 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:58.029 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:58.029 Found net devices under 0000:31:00.0: cvl_0_0 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:58.029 Found net devices under 0000:31:00.1: cvl_0_1 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:58.029 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.030 11:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.030 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.030 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.030 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.030 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.030 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.291 11:33:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.749 ms 00:33:58.291 00:33:58.291 --- 10.0.0.2 ping statistics --- 00:33:58.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.291 rtt min/avg/max/mdev = 0.749/0.749/0.749/0.000 ms 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:58.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:33:58.291 00:33:58.291 --- 10.0.0.1 ping statistics --- 00:33:58.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.291 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3701316 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3701316 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3701316 ']' 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.291 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.291 [2024-12-06 11:33:04.415189] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:58.291 [2024-12-06 11:33:04.416346] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:33:58.291 [2024-12-06 11:33:04.416395] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.552 [2024-12-06 11:33:04.526121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.552 [2024-12-06 11:33:04.575929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:58.552 [2024-12-06 11:33:04.575985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.552 [2024-12-06 11:33:04.575994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:58.552 [2024-12-06 11:33:04.576001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:58.552 [2024-12-06 11:33:04.576007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:58.552 [2024-12-06 11:33:04.576759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.552 [2024-12-06 11:33:04.654069] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:58.552 [2024-12-06 11:33:04.654346] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.123 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:59.383 [2024-12-06 11:33:05.289662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:59.383 
11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:59.383 [2024-12-06 11:33:05.318002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:59.383 malloc0 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:59.383 { 00:33:59.383 "params": { 00:33:59.383 "name": "Nvme$subsystem", 00:33:59.383 "trtype": "$TEST_TRANSPORT", 00:33:59.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.383 "adrfam": "ipv4", 00:33:59.383 "trsvcid": "$NVMF_PORT", 00:33:59.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.383 "hdgst": ${hdgst:-false}, 00:33:59.383 "ddgst": ${ddgst:-false} 00:33:59.383 }, 00:33:59.383 "method": "bdev_nvme_attach_controller" 00:33:59.383 } 00:33:59.383 EOF 00:33:59.383 )") 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:59.383 11:33:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:59.383 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:59.383 "params": { 00:33:59.383 "name": "Nvme1", 00:33:59.383 "trtype": "tcp", 00:33:59.383 "traddr": "10.0.0.2", 00:33:59.383 "adrfam": "ipv4", 00:33:59.383 "trsvcid": "4420", 00:33:59.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:59.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:59.383 "hdgst": false, 00:33:59.383 "ddgst": false 00:33:59.383 }, 00:33:59.383 "method": "bdev_nvme_attach_controller" 00:33:59.383 }' 00:33:59.383 [2024-12-06 11:33:05.426536] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:33:59.383 [2024-12-06 11:33:05.426606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701491 ] 00:33:59.383 [2024-12-06 11:33:05.511411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.643 [2024-12-06 11:33:05.553389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.643 Running I/O for 10 seconds... 
00:34:01.966 6713.00 IOPS, 52.45 MiB/s [2024-12-06T10:33:09.074Z] 6770.50 IOPS, 52.89 MiB/s [2024-12-06T10:33:10.014Z] 6785.67 IOPS, 53.01 MiB/s [2024-12-06T10:33:10.958Z] 6789.50 IOPS, 53.04 MiB/s [2024-12-06T10:33:11.898Z] 6929.20 IOPS, 54.13 MiB/s [2024-12-06T10:33:12.842Z] 7421.33 IOPS, 57.98 MiB/s [2024-12-06T10:33:13.784Z] 7772.14 IOPS, 60.72 MiB/s [2024-12-06T10:33:15.196Z] 8037.75 IOPS, 62.79 MiB/s [2024-12-06T10:33:15.769Z] 8245.22 IOPS, 64.42 MiB/s [2024-12-06T10:33:16.029Z] 8409.50 IOPS, 65.70 MiB/s 00:34:09.862 Latency(us) 00:34:09.862 [2024-12-06T10:33:16.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.862 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:09.862 Verification LBA range: start 0x0 length 0x1000 00:34:09.862 Nvme1n1 : 10.05 8377.98 65.45 0.00 0.00 15172.59 1884.16 43472.21 00:34:09.862 [2024-12-06T10:33:16.029Z] =================================================================================================================== 00:34:09.862 [2024-12-06T10:33:16.029Z] Total : 8377.98 65.45 0.00 0.00 15172.59 1884.16 43472.21 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3703851 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:09.862 11:33:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:09.862 { 00:34:09.862 "params": { 00:34:09.862 "name": "Nvme$subsystem", 00:34:09.862 "trtype": "$TEST_TRANSPORT", 00:34:09.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:09.862 "adrfam": "ipv4", 00:34:09.862 "trsvcid": "$NVMF_PORT", 00:34:09.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:09.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:09.862 "hdgst": ${hdgst:-false}, 00:34:09.862 "ddgst": ${ddgst:-false} 00:34:09.862 }, 00:34:09.862 "method": "bdev_nvme_attach_controller" 00:34:09.862 } 00:34:09.862 EOF 00:34:09.862 )") 00:34:09.862 [2024-12-06 11:33:15.933196] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:15.933227] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:09.862 11:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:09.862 "params": { 00:34:09.862 "name": "Nvme1", 00:34:09.862 "trtype": "tcp", 00:34:09.862 "traddr": "10.0.0.2", 00:34:09.862 "adrfam": "ipv4", 00:34:09.862 "trsvcid": "4420", 00:34:09.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:09.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:09.862 "hdgst": false, 00:34:09.862 "ddgst": false 00:34:09.862 }, 00:34:09.862 "method": "bdev_nvme_attach_controller" 00:34:09.862 }' 00:34:09.862 [2024-12-06 11:33:15.945160] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:15.945168] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.862 [2024-12-06 11:33:15.957158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:15.957166] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.862 [2024-12-06 11:33:15.969157] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:15.969165] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.862 [2024-12-06 11:33:15.981157] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:15.981165] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.862 [2024-12-06 11:33:15.987568] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:34:09.862 [2024-12-06 11:33:15.987627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703851 ] 00:34:09.862 [2024-12-06 11:33:15.993158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:15.993166] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.862 [2024-12-06 11:33:16.005158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:16.005165] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.862 [2024-12-06 11:33:16.017158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.862 [2024-12-06 11:33:16.017165] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.123 [2024-12-06 11:33:16.029158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.123 [2024-12-06 11:33:16.029166] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.123 [2024-12-06 11:33:16.041157] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.123 [2024-12-06 11:33:16.041165] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.123 [2024-12-06 11:33:16.053157] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.123 [2024-12-06 11:33:16.053165] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.123 [2024-12-06 11:33:16.064857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.123 [2024-12-06 11:33:16.065157] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:10.123 [2024-12-06 11:33:16.065169] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.123 [2024-12-06 11:33:16.077158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.123 [2024-12-06 11:33:16.077166] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.123 [2024-12-06 11:33:16.100439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats at roughly 12-15 ms intervals from 11:33:16.101158 through 11:33:18.326520; repeated entries elided ...]
00:34:10.123 Running I/O for 5 seconds...
00:34:11.256 19121.00 IOPS, 149.38 MiB/s [2024-12-06T10:33:17.423Z]
00:34:12.343 19152.50 IOPS, 149.63 MiB/s [2024-12-06T10:33:18.510Z]
00:34:12.343 [2024-12-06 11:33:18.340303] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.340317] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.353085] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.353099] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.366107] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.366122] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.380098] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.380113] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.393169] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.393183] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.406460] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.406474] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.420268] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.420282] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.433331] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.433346] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.446186] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 
[2024-12-06 11:33:18.446201] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.460269] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.460283] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.473282] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.473297] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.485961] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.485976] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.343 [2024-12-06 11:33:18.500426] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.343 [2024-12-06 11:33:18.500441] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.513414] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.513428] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.528021] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.528036] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.541165] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.541180] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.554316] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.554330] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.568185] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.568199] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.580963] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.580977] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.593464] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.593478] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.607748] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.607762] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.620752] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.620766] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.633545] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.633558] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.647869] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.647884] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.660877] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.660892] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:12.604 [2024-12-06 11:33:18.673883] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.673898] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.688332] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.688347] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.701199] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.701214] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.713873] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.713887] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.728040] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.728056] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.741119] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.741133] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.753805] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.604 [2024-12-06 11:33:18.753819] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.604 [2024-12-06 11:33:18.768343] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.605 [2024-12-06 11:33:18.768357] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.865 [2024-12-06 11:33:18.781130] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.865 [2024-12-06 11:33:18.781144] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.793699] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.793713] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.808263] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.808278] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.821240] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.821255] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.833994] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.834008] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.848574] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.848589] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.861771] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.861786] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.876830] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.876845] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.889792] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.889807] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.904363] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.904378] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.917212] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.917227] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.930128] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.930142] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.944214] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.944230] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.957505] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.957519] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.972143] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.972158] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.985368] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:18.985383] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:18.998427] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 
[2024-12-06 11:33:18.998441] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:19.012282] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:19.012296] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.866 [2024-12-06 11:33:19.025465] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:12.866 [2024-12-06 11:33:19.025479] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.040423] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.040438] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.053396] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.053410] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.066179] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.066194] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.080445] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.080460] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.093627] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.093642] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.107985] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.108000] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.120997] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.121012] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.133895] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.133911] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.147850] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.147870] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.160805] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.160820] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.173593] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.173607] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.188037] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.188052] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.201109] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.201123] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.214215] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.214229] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:13.127 [2024-12-06 11:33:19.228395] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.228410] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.241463] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.241476] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 19157.00 IOPS, 149.66 MiB/s [2024-12-06T10:33:19.294Z] [2024-12-06 11:33:19.256565] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.256580] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.269712] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.269726] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.127 [2024-12-06 11:33:19.283856] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.127 [2024-12-06 11:33:19.283875] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.389 [2024-12-06 11:33:19.297025] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.297040] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.389 [2024-12-06 11:33:19.309505] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.309520] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.389 [2024-12-06 11:33:19.323848] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.323868] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:13.389 [2024-12-06 11:33:19.336900] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.336919] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.389 [2024-12-06 11:33:19.350167] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.350181] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.389 [2024-12-06 11:33:19.364179] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.364194] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.389 [2024-12-06 11:33:19.377451] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.377466] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.389 [2024-12-06 11:33:19.391873] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.389 [2024-12-06 11:33:19.391889] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.404622] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.404638] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.417301] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.417316] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.430019] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.430033] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.444422] 
subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.444437] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.457499] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.457513] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.472834] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.472849] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.485661] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.485675] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.500336] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.500351] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.513573] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.513588] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.528057] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.528072] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.541256] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.541271] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.390 [2024-12-06 11:33:19.554139] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:13.390 [2024-12-06 11:33:19.554155] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.568063] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.568078] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.580901] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.580916] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.593770] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.593788] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.607786] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.607802] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.620530] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.620545] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.633143] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.633157] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.645955] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.645970] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.659995] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 
[2024-12-06 11:33:19.660010] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.672579] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.672594] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.686002] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.686017] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.700605] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.700620] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.713769] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.713784] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.728036] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.728051] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.740838] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.740853] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.754002] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.754016] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.767991] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.768006] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.780966] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.780982] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.793708] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.793722] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.651 [2024-12-06 11:33:19.808622] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.651 [2024-12-06 11:33:19.808637] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.912 [2024-12-06 11:33:19.821523] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.912 [2024-12-06 11:33:19.821538] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.912 [2024-12-06 11:33:19.836504] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.912 [2024-12-06 11:33:19.836519] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.912 [2024-12-06 11:33:19.849415] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.912 [2024-12-06 11:33:19.849433] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.912 [2024-12-06 11:33:19.864097] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.912 [2024-12-06 11:33:19.864113] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.912 [2024-12-06 11:33:19.877195] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.912 [2024-12-06 11:33:19.877210] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:13.912 [2024-12-06 11:33:19.889908] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:13.912 [2024-12-06 11:33:19.889922] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:13.912 [... same subsystem.c:2280 / nvmf_rpc.c:1542 duplicate-NSID error pair repeated at ~13 ms intervals from 11:33:19.903 through 11:33:21.244 while the zcopy workload ran ...] 00:34:14.174 19139.00 IOPS, 149.52 MiB/s [2024-12-06T10:33:20.341Z] 00:34:15.218 19139.40 IOPS, 149.53 MiB/s [2024-12-06T10:33:21.385Z] 00:34:15.218 00:34:15.218 Latency(us) 00:34:15.218 [2024-12-06T10:33:21.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.218 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:15.218 Nvme1n1 : 5.01 19142.10 149.55 0.00 0.00 6679.70 2512.21 12615.68 00:34:15.218 [2024-12-06T10:33:21.385Z] =================================================================================================================== 00:34:15.218 [2024-12-06T10:33:21.385Z] Total : 19142.10 149.55 0.00 0.00 6679.70 2512.21 12615.68 00:34:15.218 [... error pair continues at ~12 ms intervals from 11:33:21.265 until the workload process exits ...] 00:34:15.218 [2024-12-06 11:33:21.373158] subsystem.c:2280:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.218 [2024-12-06 11:33:21.373165] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:34:15.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3703851) - No such process 00:34:15.218 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3703851 00:34:15.218 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:15.218 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.218 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.479 delay0 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.479 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:15.479 [2024-12-06 11:33:21.560062] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:23.618 Initializing NVMe Controllers 00:34:23.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:23.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:23.618 Initialization complete. Launching workers. 00:34:23.618 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6827 00:34:23.618 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7105, failed to submit 42 00:34:23.618 success 6948, unsuccessful 157, failed 0 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.618 rmmod nvme_tcp 00:34:23.618 rmmod nvme_fabrics 00:34:23.618 rmmod nvme_keyring 00:34:23.618 
11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3701316 ']' 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3701316 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3701316 ']' 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3701316 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701316 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701316' 00:34:23.618 killing process with pid 3701316 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3701316 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3701316 00:34:23.618 11:33:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.618 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:25.005 00:34:25.005 real 0m35.367s 00:34:25.005 user 0m44.809s 00:34:25.005 sys 0m12.686s 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:25.005 ************************************ 
00:34:25.005 END TEST nvmf_zcopy 00:34:25.005 ************************************ 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:25.005 ************************************ 00:34:25.005 START TEST nvmf_nmic 00:34:25.005 ************************************ 00:34:25.005 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:25.268 * Looking for test storage... 
00:34:25.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:25.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.268 --rc genhtml_branch_coverage=1 00:34:25.268 --rc genhtml_function_coverage=1 00:34:25.268 --rc genhtml_legend=1 00:34:25.268 --rc geninfo_all_blocks=1 00:34:25.268 --rc geninfo_unexecuted_blocks=1 00:34:25.268 00:34:25.268 ' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:25.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.268 --rc genhtml_branch_coverage=1 00:34:25.268 --rc genhtml_function_coverage=1 00:34:25.268 --rc genhtml_legend=1 00:34:25.268 --rc geninfo_all_blocks=1 00:34:25.268 --rc geninfo_unexecuted_blocks=1 00:34:25.268 00:34:25.268 ' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:25.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.268 --rc genhtml_branch_coverage=1 00:34:25.268 --rc genhtml_function_coverage=1 00:34:25.268 --rc genhtml_legend=1 00:34:25.268 --rc geninfo_all_blocks=1 00:34:25.268 --rc geninfo_unexecuted_blocks=1 00:34:25.268 00:34:25.268 ' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:25.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.268 --rc genhtml_branch_coverage=1 00:34:25.268 --rc genhtml_function_coverage=1 00:34:25.268 --rc genhtml_legend=1 00:34:25.268 --rc geninfo_all_blocks=1 00:34:25.268 --rc geninfo_unexecuted_blocks=1 00:34:25.268 00:34:25.268 ' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.268 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:25.269 11:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.413 11:33:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:33.413 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:33.413 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:33.413 Found net devices under 0000:31:00.0: cvl_0_0 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:33.413 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:33.414 Found net devices under 0000:31:00.1: cvl_0_1 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:33.414 11:33:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.414 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:33.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:34:33.702 00:34:33.702 --- 10.0.0.2 ping statistics --- 00:34:33.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.702 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:33.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:34:33.702 00:34:33.702 --- 10.0.0.1 ping statistics --- 00:34:33.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.702 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3711049 
00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3711049 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:33.702 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3711049 ']' 00:34:33.964 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.964 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:33.964 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:33.964 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:33.964 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.964 [2024-12-06 11:33:39.902930] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:33.964 [2024-12-06 11:33:39.904116] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:34:33.964 [2024-12-06 11:33:39.904169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.964 [2024-12-06 11:33:39.998744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:33.964 [2024-12-06 11:33:40.046634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.964 [2024-12-06 11:33:40.046679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.964 [2024-12-06 11:33:40.046687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.964 [2024-12-06 11:33:40.046694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.964 [2024-12-06 11:33:40.046700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:33.964 [2024-12-06 11:33:40.048334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.964 [2024-12-06 11:33:40.048458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:33.964 [2024-12-06 11:33:40.048619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.964 [2024-12-06 11:33:40.048620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:33.964 [2024-12-06 11:33:40.109277] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:33.964 [2024-12-06 11:33:40.109448] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:33.964 [2024-12-06 11:33:40.110347] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:33.964 [2024-12-06 11:33:40.111129] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:33.964 [2024-12-06 11:33:40.111159] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:34.534 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.534 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:34.534 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:34.534 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.534 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.794 [2024-12-06 11:33:40.741088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.794 Malloc0 00:34:34.794 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.795 [2024-12-06 11:33:40.813279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.795 11:33:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:34.795 test case1: single bdev can't be used in multiple subsystems 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.795 [2024-12-06 11:33:40.849020] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:34.795 [2024-12-06 11:33:40.849045] subsystem.c:2310:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:34.795 [2024-12-06 11:33:40.849054] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.795 request: 00:34:34.795 { 00:34:34.795 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:34.795 "namespace": { 00:34:34.795 "bdev_name": "Malloc0", 00:34:34.795 "no_auto_visible": false, 00:34:34.795 "hide_metadata": false 00:34:34.795 }, 00:34:34.795 "method": "nvmf_subsystem_add_ns", 00:34:34.795 "req_id": 1 00:34:34.795 } 00:34:34.795 Got JSON-RPC error response 00:34:34.795 response: 00:34:34.795 { 00:34:34.795 "code": -32602, 00:34:34.795 "message": "Invalid parameters" 00:34:34.795 } 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:34.795 Adding namespace failed - expected result. 
00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:34.795 test case2: host connect to nvmf target in multiple paths 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:34.795 [2024-12-06 11:33:40.861126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.795 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:35.055 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:35.685 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:35.685 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:35.685 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:35.685 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:35.685 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:37.594 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:37.594 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:37.594 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:37.594 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:37.594 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:37.594 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:37.594 11:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:37.594 [global] 00:34:37.594 thread=1 00:34:37.594 invalidate=1 00:34:37.594 rw=write 00:34:37.594 time_based=1 00:34:37.594 runtime=1 00:34:37.594 ioengine=libaio 00:34:37.594 direct=1 00:34:37.594 bs=4096 00:34:37.594 iodepth=1 00:34:37.594 norandommap=0 00:34:37.594 numjobs=1 00:34:37.594 00:34:37.594 verify_dump=1 00:34:37.594 verify_backlog=512 00:34:37.594 verify_state_save=0 00:34:37.594 do_verify=1 00:34:37.594 verify=crc32c-intel 00:34:37.594 [job0] 00:34:37.594 filename=/dev/nvme0n1 00:34:37.594 Could not set queue depth (nvme0n1) 00:34:37.854 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:37.854 fio-3.35 00:34:37.854 Starting 1 thread 00:34:39.238 00:34:39.238 job0: (groupid=0, jobs=1): err= 0: pid=3712016: Fri Dec 6 
11:33:45 2024 00:34:39.238 read: IOPS=198, BW=794KiB/s (813kB/s)(796KiB/1002msec) 00:34:39.238 slat (nsec): min=7450, max=51901, avg=25527.95, stdev=4585.55 00:34:39.238 clat (usec): min=586, max=41316, avg=3972.99, stdev=10595.52 00:34:39.238 lat (usec): min=612, max=41343, avg=3998.52, stdev=10596.01 00:34:39.238 clat percentiles (usec): 00:34:39.238 | 1.00th=[ 685], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 930], 00:34:39.238 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:34:39.238 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1090], 95.00th=[41157], 00:34:39.238 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:39.238 | 99.99th=[41157] 00:34:39.238 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:34:39.238 slat (nsec): min=8858, max=68049, avg=29037.22, stdev=10857.43 00:34:39.238 clat (usec): min=145, max=595, avg=362.98, stdev=94.83 00:34:39.238 lat (usec): min=156, max=632, avg=392.01, stdev=98.40 00:34:39.238 clat percentiles (usec): 00:34:39.238 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 247], 20.00th=[ 289], 00:34:39.238 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 347], 60.00th=[ 404], 00:34:39.238 | 70.00th=[ 412], 80.00th=[ 437], 90.00th=[ 506], 95.00th=[ 529], 00:34:39.238 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 594], 99.95th=[ 594], 00:34:39.238 | 99.99th=[ 594] 00:34:39.238 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:39.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:39.238 lat (usec) : 250=7.45%, 500=56.40%, 750=8.86%, 1000=19.27% 00:34:39.238 lat (msec) : 2=5.91%, 50=2.11% 00:34:39.238 cpu : usr=1.40%, sys=2.30%, ctx=711, majf=0, minf=1 00:34:39.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:39.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:39.238 issued rwts: total=199,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:39.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:39.238 00:34:39.238 Run status group 0 (all jobs): 00:34:39.238 READ: bw=794KiB/s (813kB/s), 794KiB/s-794KiB/s (813kB/s-813kB/s), io=796KiB (815kB), run=1002-1002msec 00:34:39.238 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), run=1002-1002msec 00:34:39.238 00:34:39.238 Disk stats (read/write): 00:34:39.238 nvme0n1: ios=244/512, merge=0/0, ticks=1052/130, in_queue=1182, util=97.60% 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:39.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:39.238 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:39.239 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:39.239 
11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:39.239 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:39.239 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.239 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:39.239 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.239 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.239 rmmod nvme_tcp 00:34:39.239 rmmod nvme_fabrics 00:34:39.239 rmmod nvme_keyring 00:34:39.239 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3711049 ']' 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3711049 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3711049 ']' 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3711049 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3711049 
00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3711049' 00:34:39.499 killing process with pid 3711049 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3711049 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3711049 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.499 11:33:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.499 11:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.043 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:42.043 00:34:42.043 real 0m16.568s 00:34:42.043 user 0m33.058s 00:34:42.043 sys 0m8.243s 00:34:42.043 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:42.043 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:42.043 ************************************ 00:34:42.043 END TEST nvmf_nmic 00:34:42.043 ************************************ 00:34:42.043 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:42.043 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:42.043 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:42.043 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:42.043 ************************************ 00:34:42.043 START TEST nvmf_fio_target 00:34:42.043 ************************************ 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:42.044 * Looking for test storage... 
00:34:42.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:42.044 
11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:42.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.044 --rc genhtml_branch_coverage=1 00:34:42.044 --rc genhtml_function_coverage=1 00:34:42.044 --rc genhtml_legend=1 00:34:42.044 --rc geninfo_all_blocks=1 00:34:42.044 --rc geninfo_unexecuted_blocks=1 00:34:42.044 00:34:42.044 ' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:42.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.044 --rc genhtml_branch_coverage=1 00:34:42.044 --rc genhtml_function_coverage=1 00:34:42.044 --rc genhtml_legend=1 00:34:42.044 --rc geninfo_all_blocks=1 00:34:42.044 --rc geninfo_unexecuted_blocks=1 00:34:42.044 00:34:42.044 ' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:42.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.044 --rc genhtml_branch_coverage=1 00:34:42.044 --rc genhtml_function_coverage=1 00:34:42.044 --rc genhtml_legend=1 00:34:42.044 --rc geninfo_all_blocks=1 00:34:42.044 --rc geninfo_unexecuted_blocks=1 00:34:42.044 00:34:42.044 ' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:42.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:42.044 --rc genhtml_branch_coverage=1 00:34:42.044 --rc genhtml_function_coverage=1 00:34:42.044 --rc genhtml_legend=1 00:34:42.044 --rc geninfo_all_blocks=1 
00:34:42.044 --rc geninfo_unexecuted_blocks=1 00:34:42.044 00:34:42.044 ' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:42.044 
11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.044 11:33:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:42.044 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:42.045 11:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:42.045 
11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:42.045 11:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:42.045 11:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:50.190 11:33:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:50.190 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:50.190 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.190 
11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:50.190 Found net 
devices under 0000:31:00.0: cvl_0_0 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:50.190 Found net devices under 0000:31:00.1: cvl_0_1 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:50.190 11:33:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:50.190 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.190 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:34:50.190 00:34:50.190 --- 10.0.0.2 ping statistics --- 00:34:50.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.190 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:50.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:34:50.191 00:34:50.191 --- 10.0.0.1 ping statistics --- 00:34:50.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.191 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 11:33:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3716947 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3716947 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3716947 ']' 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.191 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 [2024-12-06 11:33:56.283246] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:50.191 [2024-12-06 11:33:56.284316] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:34:50.191 [2024-12-06 11:33:56.284356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.450 [2024-12-06 11:33:56.376260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:50.450 [2024-12-06 11:33:56.412182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.450 [2024-12-06 11:33:56.412214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.450 [2024-12-06 11:33:56.412222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.450 [2024-12-06 11:33:56.412229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.450 [2024-12-06 11:33:56.412235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.450 [2024-12-06 11:33:56.413724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.450 [2024-12-06 11:33:56.413860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:50.450 [2024-12-06 11:33:56.414031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:50.450 [2024-12-06 11:33:56.414154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.450 [2024-12-06 11:33:56.470401] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:50.450 [2024-12-06 11:33:56.470534] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:50.450 [2024-12-06 11:33:56.471443] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:50.450 [2024-12-06 11:33:56.472192] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:50.450 [2024-12-06 11:33:56.472258] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:51.019 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.019 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:51.019 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:51.019 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:51.019 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:51.019 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.019 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:51.280 [2024-12-06 11:33:57.262944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.280 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:51.541 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:51.541 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:51.541 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:51.541 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:51.801 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:51.801 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:52.061 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:52.061 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:52.061 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:52.321 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:52.321 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:52.581 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:52.581 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:52.581 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:52.581 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:52.845 11:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:53.106 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:53.106 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:53.106 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:53.106 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:53.367 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.627 [2024-12-06 11:33:59.550760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.627 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:53.627 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:53.888 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:54.460 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:54.460 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:54.460 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:54.460 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:54.460 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:54.460 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:56.370 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:56.370 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:56.370 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:56.370 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:56.370 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:56.370 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:56.370 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:56.370 [global] 00:34:56.370 thread=1 00:34:56.370 invalidate=1 00:34:56.370 rw=write 00:34:56.370 time_based=1 00:34:56.370 runtime=1 00:34:56.370 ioengine=libaio 00:34:56.370 direct=1 00:34:56.370 bs=4096 00:34:56.370 iodepth=1 00:34:56.370 norandommap=0 00:34:56.370 numjobs=1 00:34:56.370 00:34:56.370 verify_dump=1 00:34:56.370 verify_backlog=512 00:34:56.370 verify_state_save=0 00:34:56.370 do_verify=1 00:34:56.370 verify=crc32c-intel 00:34:56.370 [job0] 00:34:56.370 filename=/dev/nvme0n1 00:34:56.370 [job1] 00:34:56.370 filename=/dev/nvme0n2 00:34:56.370 [job2] 00:34:56.370 filename=/dev/nvme0n3 00:34:56.370 [job3] 00:34:56.370 filename=/dev/nvme0n4 00:34:56.370 Could not set queue depth (nvme0n1) 00:34:56.370 Could not set queue depth (nvme0n2) 00:34:56.370 Could not set queue depth (nvme0n3) 00:34:56.370 Could not set queue depth (nvme0n4) 00:34:56.956 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:56.956 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:56.956 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:56.956 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:56.956 fio-3.35 00:34:56.956 Starting 4 threads 00:34:58.338 00:34:58.338 job0: (groupid=0, jobs=1): err= 0: pid=3718314: Fri Dec 6 11:34:04 2024 00:34:58.338 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:58.338 slat (nsec): min=25806, max=76858, avg=26727.86, stdev=3361.40 00:34:58.338 clat (usec): min=773, max=1242, avg=1011.38, stdev=80.08 00:34:58.338 lat (usec): min=799, 
max=1268, avg=1038.11, stdev=79.94 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[ 799], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 971], 00:34:58.338 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1012], 60.00th=[ 1029], 00:34:58.338 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:34:58.338 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1237], 99.95th=[ 1237], 00:34:58.338 | 99.99th=[ 1237] 00:34:58.338 write: IOPS=683, BW=2733KiB/s (2799kB/s)(2736KiB/1001msec); 0 zone resets 00:34:58.338 slat (nsec): min=9984, max=53270, avg=30337.38, stdev=9764.20 00:34:58.338 clat (usec): min=341, max=993, avg=640.23, stdev=116.92 00:34:58.338 lat (usec): min=359, max=1028, avg=670.57, stdev=121.70 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[ 359], 5.00th=[ 416], 10.00th=[ 486], 20.00th=[ 537], 00:34:58.338 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 676], 00:34:58.338 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 816], 00:34:58.338 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 996], 99.95th=[ 996], 00:34:58.338 | 99.99th=[ 996] 00:34:58.338 bw ( KiB/s): min= 4096, max= 4096, per=46.50%, avg=4096.00, stdev= 0.00, samples=1 00:34:58.338 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:58.338 lat (usec) : 500=8.36%, 750=39.13%, 1000=26.67% 00:34:58.338 lat (msec) : 2=25.84% 00:34:58.338 cpu : usr=1.90%, sys=3.40%, ctx=1199, majf=0, minf=1 00:34:58.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 issued rwts: total=512,684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:58.338 job1: (groupid=0, jobs=1): err= 0: pid=3718316: Fri Dec 6 11:34:04 2024 00:34:58.338 read: IOPS=182, BW=731KiB/s 
(749kB/s)(732KiB/1001msec) 00:34:58.338 slat (nsec): min=23933, max=60400, avg=25395.98, stdev=3468.88 00:34:58.338 clat (usec): min=893, max=41999, avg=3594.20, stdev=9587.89 00:34:58.338 lat (usec): min=919, max=42024, avg=3619.59, stdev=9587.76 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[ 938], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1106], 00:34:58.338 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:34:58.338 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[41157], 00:34:58.338 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:58.338 | 99.99th=[42206] 00:34:58.338 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:58.338 slat (nsec): min=9849, max=66355, avg=31341.09, stdev=8028.19 00:34:58.338 clat (usec): min=250, max=1000, avg=619.37, stdev=152.05 00:34:58.338 lat (usec): min=283, max=1034, avg=650.72, stdev=152.53 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[ 285], 5.00th=[ 359], 10.00th=[ 420], 20.00th=[ 486], 00:34:58.338 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 676], 00:34:58.338 | 70.00th=[ 709], 80.00th=[ 758], 90.00th=[ 816], 95.00th=[ 865], 00:34:58.338 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1004], 99.95th=[ 1004], 00:34:58.338 | 99.99th=[ 1004] 00:34:58.338 bw ( KiB/s): min= 4096, max= 4096, per=46.50%, avg=4096.00, stdev= 0.00, samples=1 00:34:58.338 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:58.338 lat (usec) : 500=16.98%, 750=41.44%, 1000=15.97% 00:34:58.338 lat (msec) : 2=24.03%, 50=1.58% 00:34:58.338 cpu : usr=1.20%, sys=1.80%, ctx=695, majf=0, minf=1 00:34:58.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 issued rwts: total=183,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:58.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:58.338 job2: (groupid=0, jobs=1): err= 0: pid=3718329: Fri Dec 6 11:34:04 2024 00:34:58.338 read: IOPS=14, BW=59.8KiB/s (61.2kB/s)(60.0KiB/1004msec) 00:34:58.338 slat (nsec): min=25464, max=26427, avg=25950.60, stdev=263.39 00:34:58.338 clat (usec): min=41877, max=42900, avg=42030.90, stdev=244.92 00:34:58.338 lat (usec): min=41902, max=42926, avg=42056.85, stdev=244.89 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:58.338 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:58.338 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:34:58.338 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:58.338 | 99.99th=[42730] 00:34:58.338 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:34:58.338 slat (nsec): min=9837, max=73247, avg=31991.66, stdev=8115.36 00:34:58.338 clat (usec): min=184, max=1098, avg=688.53, stdev=158.91 00:34:58.338 lat (usec): min=218, max=1134, avg=720.52, stdev=161.56 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[ 281], 5.00th=[ 412], 10.00th=[ 474], 20.00th=[ 545], 00:34:58.338 | 30.00th=[ 611], 40.00th=[ 668], 50.00th=[ 725], 60.00th=[ 750], 00:34:58.338 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 873], 95.00th=[ 922], 00:34:58.338 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1106], 99.95th=[ 1106], 00:34:58.338 | 99.99th=[ 1106] 00:34:58.338 bw ( KiB/s): min= 4096, max= 4096, per=46.50%, avg=4096.00, stdev= 0.00, samples=1 00:34:58.338 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:58.338 lat (usec) : 250=0.57%, 500=11.57%, 750=45.54%, 1000=38.71% 00:34:58.338 lat (msec) : 2=0.76%, 50=2.85% 00:34:58.338 cpu : usr=0.80%, sys=1.60%, ctx=528, majf=0, minf=1 00:34:58.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:34:58.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:58.338 job3: (groupid=0, jobs=1): err= 0: pid=3718336: Fri Dec 6 11:34:04 2024 00:34:58.338 read: IOPS=18, BW=75.4KiB/s (77.2kB/s)(76.0KiB/1008msec) 00:34:58.338 slat (nsec): min=26469, max=28732, avg=27013.42, stdev=518.68 00:34:58.338 clat (usec): min=40763, max=41920, avg=41016.81, stdev=253.64 00:34:58.338 lat (usec): min=40790, max=41946, avg=41043.83, stdev=253.55 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:34:58.338 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:58.338 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:58.338 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:58.338 | 99.99th=[41681] 00:34:58.338 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:34:58.338 slat (nsec): min=9579, max=59091, avg=32466.40, stdev=9013.19 00:34:58.338 clat (usec): min=146, max=778, avg=404.63, stdev=126.97 00:34:58.338 lat (usec): min=159, max=814, avg=437.10, stdev=128.77 00:34:58.338 clat percentiles (usec): 00:34:58.338 | 1.00th=[ 161], 5.00th=[ 239], 10.00th=[ 255], 20.00th=[ 289], 00:34:58.338 | 30.00th=[ 318], 40.00th=[ 355], 50.00th=[ 392], 60.00th=[ 433], 00:34:58.338 | 70.00th=[ 469], 80.00th=[ 515], 90.00th=[ 586], 95.00th=[ 635], 00:34:58.338 | 99.00th=[ 725], 99.50th=[ 766], 99.90th=[ 783], 99.95th=[ 783], 00:34:58.338 | 99.99th=[ 783] 00:34:58.338 bw ( KiB/s): min= 4096, max= 4096, per=46.50%, avg=4096.00, stdev= 0.00, samples=1 00:34:58.338 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:58.338 lat (usec) : 250=7.72%, 500=66.85%, 
750=21.28%, 1000=0.56% 00:34:58.338 lat (msec) : 50=3.58% 00:34:58.338 cpu : usr=1.39%, sys=1.79%, ctx=531, majf=0, minf=1 00:34:58.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.338 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:58.338 00:34:58.338 Run status group 0 (all jobs): 00:34:58.338 READ: bw=2893KiB/s (2962kB/s), 59.8KiB/s-2046KiB/s (61.2kB/s-2095kB/s), io=2916KiB (2986kB), run=1001-1008msec 00:34:58.338 WRITE: bw=8810KiB/s (9021kB/s), 2032KiB/s-2733KiB/s (2081kB/s-2799kB/s), io=8880KiB (9093kB), run=1001-1008msec 00:34:58.338 00:34:58.338 Disk stats (read/write): 00:34:58.338 nvme0n1: ios=510/512, merge=0/0, ticks=946/317, in_queue=1263, util=96.39% 00:34:58.338 nvme0n2: ios=179/512, merge=0/0, ticks=590/292, in_queue=882, util=91.92% 00:34:58.338 nvme0n3: ios=64/512, merge=0/0, ticks=537/330, in_queue=867, util=95.55% 00:34:58.338 nvme0n4: ios=35/512, merge=0/0, ticks=786/157, in_queue=943, util=90.35% 00:34:58.338 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:58.338 [global] 00:34:58.338 thread=1 00:34:58.338 invalidate=1 00:34:58.338 rw=randwrite 00:34:58.338 time_based=1 00:34:58.338 runtime=1 00:34:58.338 ioengine=libaio 00:34:58.338 direct=1 00:34:58.338 bs=4096 00:34:58.338 iodepth=1 00:34:58.338 norandommap=0 00:34:58.338 numjobs=1 00:34:58.338 00:34:58.338 verify_dump=1 00:34:58.338 verify_backlog=512 00:34:58.338 verify_state_save=0 00:34:58.338 do_verify=1 00:34:58.338 verify=crc32c-intel 00:34:58.338 [job0] 00:34:58.338 filename=/dev/nvme0n1 00:34:58.338 [job1] 00:34:58.338 
filename=/dev/nvme0n2 00:34:58.338 [job2] 00:34:58.338 filename=/dev/nvme0n3 00:34:58.338 [job3] 00:34:58.338 filename=/dev/nvme0n4 00:34:58.338 Could not set queue depth (nvme0n1) 00:34:58.338 Could not set queue depth (nvme0n2) 00:34:58.338 Could not set queue depth (nvme0n3) 00:34:58.338 Could not set queue depth (nvme0n4) 00:34:58.599 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.599 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.599 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.599 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.599 fio-3.35 00:34:58.599 Starting 4 threads 00:34:59.984 00:34:59.984 job0: (groupid=0, jobs=1): err= 0: pid=3718817: Fri Dec 6 11:34:05 2024 00:34:59.985 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2060KiB/1007msec) 00:34:59.985 slat (nsec): min=6814, max=44800, avg=22340.66, stdev=7780.29 00:34:59.985 clat (usec): min=188, max=42028, avg=875.32, stdev=3149.95 00:34:59.985 lat (usec): min=197, max=42053, avg=897.66, stdev=3150.28 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 1.00th=[ 215], 5.00th=[ 322], 10.00th=[ 429], 20.00th=[ 490], 00:34:59.985 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 635], 60.00th=[ 701], 00:34:59.985 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 881], 00:34:59.985 | 99.00th=[ 955], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:59.985 | 99.99th=[42206] 00:34:59.985 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:34:59.985 slat (nsec): min=9241, max=63181, avg=27918.16, stdev=9534.78 00:34:59.985 clat (usec): min=148, max=1039, avg=492.65, stdev=152.46 00:34:59.985 lat (usec): min=181, max=1066, avg=520.57, stdev=155.52 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 
1.00th=[ 239], 5.00th=[ 285], 10.00th=[ 314], 20.00th=[ 367], 00:34:59.985 | 30.00th=[ 408], 40.00th=[ 441], 50.00th=[ 469], 60.00th=[ 494], 00:34:59.985 | 70.00th=[ 545], 80.00th=[ 627], 90.00th=[ 734], 95.00th=[ 791], 00:34:59.985 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 1037], 99.95th=[ 1037], 00:34:59.985 | 99.99th=[ 1037] 00:34:59.985 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=2 00:34:59.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:34:59.985 lat (usec) : 250=2.47%, 500=45.81%, 750=35.28%, 1000=16.05% 00:34:59.985 lat (msec) : 2=0.19%, 50=0.19% 00:34:59.985 cpu : usr=2.78%, sys=3.48%, ctx=1539, majf=0, minf=1 00:34:59.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.985 job1: (groupid=0, jobs=1): err= 0: pid=3718818: Fri Dec 6 11:34:05 2024 00:34:59.985 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:59.985 slat (nsec): min=6652, max=63039, avg=22518.61, stdev=8267.38 00:34:59.985 clat (usec): min=563, max=2493, avg=1033.82, stdev=191.77 00:34:59.985 lat (usec): min=573, max=2520, avg=1056.34, stdev=193.01 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 938], 00:34:59.985 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1045], 00:34:59.985 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:59.985 | 99.00th=[ 1975], 99.50th=[ 2311], 99.90th=[ 2507], 99.95th=[ 2507], 00:34:59.985 | 99.99th=[ 2507] 00:34:59.985 write: IOPS=717, BW=2869KiB/s (2938kB/s)(2872KiB/1001msec); 0 zone resets 00:34:59.985 slat (nsec): min=2640, max=44048, 
avg=16529.33, stdev=9819.92 00:34:59.985 clat (usec): min=232, max=1019, avg=613.99, stdev=125.43 00:34:59.985 lat (usec): min=235, max=1031, avg=630.52, stdev=125.87 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 453], 20.00th=[ 510], 00:34:59.985 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:34:59.985 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 816], 00:34:59.985 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1020], 99.95th=[ 1020], 00:34:59.985 | 99.99th=[ 1020] 00:34:59.985 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:59.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:59.985 lat (usec) : 250=0.16%, 500=10.49%, 750=40.65%, 1000=23.01% 00:34:59.985 lat (msec) : 2=25.37%, 4=0.33% 00:34:59.985 cpu : usr=2.20%, sys=3.20%, ctx=1230, majf=0, minf=1 00:34:59.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 issued rwts: total=512,718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.985 job2: (groupid=0, jobs=1): err= 0: pid=3718821: Fri Dec 6 11:34:05 2024 00:34:59.985 read: IOPS=144, BW=577KiB/s (591kB/s)(596KiB/1033msec) 00:34:59.985 slat (nsec): min=20300, max=26266, avg=25491.23, stdev=487.15 00:34:59.985 clat (usec): min=835, max=41762, avg=4661.72, stdev=11380.21 00:34:59.985 lat (usec): min=861, max=41788, avg=4687.21, stdev=11380.22 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 1.00th=[ 848], 5.00th=[ 963], 10.00th=[ 996], 20.00th=[ 1074], 00:34:59.985 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1221], 00:34:59.985 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1352], 95.00th=[41157], 00:34:59.985 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:59.985 | 99.99th=[41681] 00:34:59.985 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:34:59.985 slat (nsec): min=9927, max=65699, avg=29476.02, stdev=9053.13 00:34:59.985 clat (usec): min=205, max=1082, avg=614.12, stdev=135.94 00:34:59.985 lat (usec): min=239, max=1117, avg=643.59, stdev=138.04 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 1.00th=[ 310], 5.00th=[ 400], 10.00th=[ 437], 20.00th=[ 498], 00:34:59.985 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 660], 00:34:59.985 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 824], 00:34:59.985 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1090], 99.95th=[ 1090], 00:34:59.985 | 99.99th=[ 1090] 00:34:59.985 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:59.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:59.985 lat (usec) : 250=0.15%, 500=15.73%, 750=50.68%, 1000=12.71% 00:34:59.985 lat (msec) : 2=18.76%, 50=1.97% 00:34:59.985 cpu : usr=0.97%, sys=1.74%, ctx=662, majf=0, minf=1 00:34:59.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 issued rwts: total=149,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.985 job3: (groupid=0, jobs=1): err= 0: pid=3718823: Fri Dec 6 11:34:05 2024 00:34:59.985 read: IOPS=18, BW=74.7KiB/s (76.5kB/s)(76.0KiB/1017msec) 00:34:59.985 slat (nsec): min=27905, max=33456, avg=28760.68, stdev=1355.03 00:34:59.985 clat (usec): min=40816, max=43003, avg=41111.36, stdev=510.36 00:34:59.985 lat (usec): min=40845, max=43037, avg=41140.12, stdev=511.36 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 1.00th=[40633], 
5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:34:59.985 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:59.985 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[43254], 00:34:59.985 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:59.985 | 99.99th=[43254] 00:34:59.985 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:34:59.985 slat (nsec): min=5653, max=60484, avg=30362.37, stdev=10667.71 00:34:59.985 clat (usec): min=138, max=1107, avg=420.66, stdev=125.82 00:34:59.985 lat (usec): min=166, max=1117, avg=451.02, stdev=127.16 00:34:59.985 clat percentiles (usec): 00:34:59.985 | 1.00th=[ 202], 5.00th=[ 231], 10.00th=[ 273], 20.00th=[ 318], 00:34:59.985 | 30.00th=[ 347], 40.00th=[ 367], 50.00th=[ 400], 60.00th=[ 441], 00:34:59.985 | 70.00th=[ 482], 80.00th=[ 523], 90.00th=[ 594], 95.00th=[ 644], 00:34:59.985 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 1106], 99.95th=[ 1106], 00:34:59.985 | 99.99th=[ 1106] 00:34:59.985 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:59.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:59.985 lat (usec) : 250=6.78%, 500=64.41%, 750=24.67%, 1000=0.38% 00:34:59.985 lat (msec) : 2=0.19%, 50=3.58% 00:34:59.985 cpu : usr=0.98%, sys=1.97%, ctx=533, majf=0, minf=1 00:34:59.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.985 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:59.985 00:34:59.985 Run status group 0 (all jobs): 00:34:59.985 READ: bw=4627KiB/s (4738kB/s), 74.7KiB/s-2046KiB/s (76.5kB/s-2095kB/s), io=4780KiB (4895kB), run=1001-1033msec 00:34:59.985 WRITE: bw=10.5MiB/s 
(11.0MB/s), 1983KiB/s-4068KiB/s (2030kB/s-4165kB/s), io=10.8MiB (11.3MB), run=1001-1033msec 00:34:59.985 00:34:59.985 Disk stats (read/write): 00:34:59.985 nvme0n1: ios=562/957, merge=0/0, ticks=372/447, in_queue=819, util=87.58% 00:34:59.985 nvme0n2: ios=518/512, merge=0/0, ticks=605/259, in_queue=864, util=96.43% 00:34:59.985 nvme0n3: ios=118/512, merge=0/0, ticks=495/294, in_queue=789, util=88.29% 00:34:59.985 nvme0n4: ios=71/512, merge=0/0, ticks=758/183, in_queue=941, util=97.22% 00:34:59.985 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:59.985 [global] 00:34:59.985 thread=1 00:34:59.985 invalidate=1 00:34:59.985 rw=write 00:34:59.985 time_based=1 00:34:59.985 runtime=1 00:34:59.985 ioengine=libaio 00:34:59.985 direct=1 00:34:59.985 bs=4096 00:34:59.985 iodepth=128 00:34:59.985 norandommap=0 00:34:59.985 numjobs=1 00:34:59.985 00:34:59.985 verify_dump=1 00:34:59.985 verify_backlog=512 00:34:59.985 verify_state_save=0 00:34:59.985 do_verify=1 00:34:59.985 verify=crc32c-intel 00:34:59.985 [job0] 00:34:59.985 filename=/dev/nvme0n1 00:34:59.985 [job1] 00:34:59.985 filename=/dev/nvme0n2 00:34:59.985 [job2] 00:34:59.985 filename=/dev/nvme0n3 00:34:59.985 [job3] 00:34:59.985 filename=/dev/nvme0n4 00:34:59.985 Could not set queue depth (nvme0n1) 00:34:59.985 Could not set queue depth (nvme0n2) 00:34:59.986 Could not set queue depth (nvme0n3) 00:34:59.986 Could not set queue depth (nvme0n4) 00:35:00.245 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:00.245 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:00.245 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:00.245 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:00.245 fio-3.35 00:35:00.245 Starting 4 threads 00:35:01.651 00:35:01.651 job0: (groupid=0, jobs=1): err= 0: pid=3719335: Fri Dec 6 11:34:07 2024 00:35:01.651 read: IOPS=6063, BW=23.7MiB/s (24.8MB/s)(23.8MiB/1006msec) 00:35:01.651 slat (nsec): min=897, max=29268k, avg=75094.75, stdev=802035.49 00:35:01.651 clat (usec): min=2435, max=57695, avg=10732.13, stdev=8178.09 00:35:01.651 lat (usec): min=2442, max=57720, avg=10807.22, stdev=8243.30 00:35:01.651 clat percentiles (usec): 00:35:01.651 | 1.00th=[ 2868], 5.00th=[ 4948], 10.00th=[ 5735], 20.00th=[ 6456], 00:35:01.651 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8225], 00:35:01.651 | 70.00th=[ 9503], 80.00th=[12518], 90.00th=[24249], 95.00th=[28443], 00:35:01.651 | 99.00th=[44303], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:35:01.651 | 99.99th=[57934] 00:35:01.651 write: IOPS=6673, BW=26.1MiB/s (27.3MB/s)(26.2MiB/1006msec); 0 zone resets 00:35:01.651 slat (nsec): min=1575, max=21160k, avg=65446.09, stdev=534309.53 00:35:01.651 clat (usec): min=1186, max=64730, avg=9270.05, stdev=6319.23 00:35:01.651 lat (usec): min=1201, max=64732, avg=9335.50, stdev=6355.82 00:35:01.651 clat percentiles (usec): 00:35:01.651 | 1.00th=[ 2343], 5.00th=[ 3982], 10.00th=[ 4359], 20.00th=[ 5080], 00:35:01.651 | 30.00th=[ 5800], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 8094], 00:35:01.651 | 70.00th=[ 9765], 80.00th=[11863], 90.00th=[16188], 95.00th=[23987], 00:35:01.651 | 99.00th=[30016], 99.50th=[32113], 99.90th=[63177], 99.95th=[63177], 00:35:01.651 | 99.99th=[64750] 00:35:01.651 bw ( KiB/s): min=24080, max=28672, per=30.33%, avg=26376.00, stdev=3247.03, samples=2 00:35:01.651 iops : min= 6020, max= 7168, avg=6594.00, stdev=811.76, samples=2 00:35:01.651 lat (msec) : 2=0.26%, 4=3.87%, 10=67.28%, 20=18.70%, 50=9.68% 00:35:01.651 lat (msec) : 100=0.22% 00:35:01.651 cpu : usr=4.38%, sys=7.16%, ctx=407, majf=0, minf=1 00:35:01.651 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:01.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:01.651 issued rwts: total=6100,6714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:01.651 job1: (groupid=0, jobs=1): err= 0: pid=3719336: Fri Dec 6 11:34:07 2024 00:35:01.651 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:35:01.651 slat (nsec): min=1486, max=24273k, avg=151390.39, stdev=1091974.34 00:35:01.651 clat (usec): min=7682, max=60280, avg=19014.10, stdev=11409.16 00:35:01.651 lat (usec): min=7718, max=60306, avg=19165.49, stdev=11513.44 00:35:01.651 clat percentiles (usec): 00:35:01.651 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11338], 00:35:01.651 | 30.00th=[12125], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:35:01.651 | 70.00th=[16909], 80.00th=[28967], 90.00th=[37487], 95.00th=[45876], 00:35:01.651 | 99.00th=[48497], 99.50th=[52691], 99.90th=[56886], 99.95th=[57410], 00:35:01.651 | 99.99th=[60031] 00:35:01.651 write: IOPS=2995, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1009msec); 0 zone resets 00:35:01.651 slat (nsec): min=1573, max=15960k, avg=157704.59, stdev=935392.30 00:35:01.651 clat (usec): min=1202, max=97954, avg=26272.24, stdev=22783.15 00:35:01.651 lat (usec): min=1214, max=97961, avg=26429.95, stdev=22864.54 00:35:01.651 clat percentiles (usec): 00:35:01.651 | 1.00th=[ 2671], 5.00th=[ 6980], 10.00th=[ 7963], 20.00th=[11076], 00:35:01.651 | 30.00th=[11207], 40.00th=[12387], 50.00th=[14353], 60.00th=[17171], 00:35:01.651 | 70.00th=[25297], 80.00th=[45876], 90.00th=[67634], 95.00th=[76022], 00:35:01.651 | 99.00th=[87557], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:35:01.651 | 99.99th=[98042] 00:35:01.651 bw ( KiB/s): min=10248, max=12904, per=13.31%, avg=11576.00, stdev=1878.08, samples=2 00:35:01.651 
iops : min= 2562, max= 3226, avg=2894.00, stdev=469.52, samples=2 00:35:01.651 lat (msec) : 2=0.39%, 4=0.61%, 10=9.55%, 20=56.27%, 50=23.16% 00:35:01.651 lat (msec) : 100=10.01% 00:35:01.651 cpu : usr=2.58%, sys=2.98%, ctx=226, majf=0, minf=1 00:35:01.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:35:01.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:01.651 issued rwts: total=2560,3022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:01.651 job2: (groupid=0, jobs=1): err= 0: pid=3719337: Fri Dec 6 11:34:07 2024 00:35:01.651 read: IOPS=7629, BW=29.8MiB/s (31.2MB/s)(31.1MiB/1042msec) 00:35:01.651 slat (nsec): min=927, max=7572.4k, avg=53704.55, stdev=457089.06 00:35:01.651 clat (usec): min=1738, max=53258, avg=8742.41, stdev=7046.43 00:35:01.651 lat (usec): min=1747, max=55924, avg=8796.11, stdev=7057.77 00:35:01.651 clat percentiles (usec): 00:35:01.651 | 1.00th=[ 2540], 5.00th=[ 3621], 10.00th=[ 4555], 20.00th=[ 5866], 00:35:01.651 | 30.00th=[ 6456], 40.00th=[ 6915], 50.00th=[ 7701], 60.00th=[ 8094], 00:35:01.651 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[11863], 95.00th=[13698], 00:35:01.651 | 99.00th=[50070], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:35:01.651 | 99.99th=[53216] 00:35:01.651 write: IOPS=8353, BW=32.6MiB/s (34.2MB/s)(34.0MiB/1042msec); 0 zone resets 00:35:01.651 slat (nsec): min=1651, max=21568k, avg=51113.52, stdev=486476.18 00:35:01.651 clat (usec): min=690, max=29552, avg=7213.94, stdev=3099.74 00:35:01.651 lat (usec): min=698, max=33816, avg=7265.05, stdev=3129.54 00:35:01.651 clat percentiles (usec): 00:35:01.651 | 1.00th=[ 2024], 5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 5014], 00:35:01.651 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7504], 00:35:01.652 | 70.00th=[ 7963], 80.00th=[ 8848], 
90.00th=[10421], 95.00th=[12125], 00:35:01.652 | 99.00th=[23200], 99.50th=[23725], 99.90th=[23987], 99.95th=[23987], 00:35:01.652 | 99.99th=[29492] 00:35:01.652 bw ( KiB/s): min=30616, max=39016, per=40.03%, avg=34816.00, stdev=5939.70, samples=2 00:35:01.652 iops : min= 7654, max= 9754, avg=8704.00, stdev=1484.92, samples=2 00:35:01.652 lat (usec) : 750=0.04%, 1000=0.02% 00:35:01.652 lat (msec) : 2=0.66%, 4=8.62%, 10=75.47%, 20=13.29%, 50=1.50% 00:35:01.652 lat (msec) : 100=0.40% 00:35:01.652 cpu : usr=5.76%, sys=8.55%, ctx=371, majf=0, minf=1 00:35:01.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:01.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:01.652 issued rwts: total=7950,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:01.652 job3: (groupid=0, jobs=1): err= 0: pid=3719338: Fri Dec 6 11:34:07 2024 00:35:01.652 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:35:01.652 slat (nsec): min=982, max=13186k, avg=111757.38, stdev=806611.64 00:35:01.652 clat (usec): min=4510, max=86902, avg=13752.00, stdev=9629.36 00:35:01.652 lat (usec): min=4914, max=86909, avg=13863.76, stdev=9716.40 00:35:01.652 clat percentiles (usec): 00:35:01.652 | 1.00th=[ 5604], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8586], 00:35:01.652 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[11731], 60.00th=[13698], 00:35:01.652 | 70.00th=[14353], 80.00th=[16319], 90.00th=[19268], 95.00th=[22676], 00:35:01.652 | 99.00th=[73925], 99.50th=[81265], 99.90th=[86508], 99.95th=[86508], 00:35:01.652 | 99.99th=[86508] 00:35:01.652 write: IOPS=4189, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1006msec); 0 zone resets 00:35:01.652 slat (nsec): min=1686, max=12607k, avg=116897.22, stdev=722653.74 00:35:01.652 clat (usec): min=734, max=86888, avg=16810.22, stdev=16034.57 00:35:01.652 lat 
(usec): min=769, max=86895, avg=16927.11, stdev=16140.83 00:35:01.652 clat percentiles (usec): 00:35:01.652 | 1.00th=[ 3392], 5.00th=[ 3687], 10.00th=[ 5276], 20.00th=[ 7439], 00:35:01.652 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[12125], 00:35:01.652 | 70.00th=[13435], 80.00th=[25297], 90.00th=[42206], 95.00th=[54264], 00:35:01.652 | 99.00th=[70779], 99.50th=[74974], 99.90th=[78119], 99.95th=[78119], 00:35:01.652 | 99.99th=[86508] 00:35:01.652 bw ( KiB/s): min= 8192, max=24624, per=18.87%, avg=16408.00, stdev=11619.18, samples=2 00:35:01.652 iops : min= 2048, max= 6156, avg=4102.00, stdev=2904.79, samples=2 00:35:01.652 lat (usec) : 750=0.02%, 1000=0.08% 00:35:01.652 lat (msec) : 2=0.20%, 4=3.04%, 10=41.57%, 20=38.88%, 50=12.22% 00:35:01.652 lat (msec) : 100=3.97% 00:35:01.652 cpu : usr=3.08%, sys=4.38%, ctx=335, majf=0, minf=1 00:35:01.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:01.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:01.652 issued rwts: total=4096,4215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.652 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:01.652 00:35:01.652 Run status group 0 (all jobs): 00:35:01.652 READ: bw=77.6MiB/s (81.4MB/s), 9.91MiB/s-29.8MiB/s (10.4MB/s-31.2MB/s), io=80.9MiB (84.8MB), run=1006-1042msec 00:35:01.652 WRITE: bw=84.9MiB/s (89.1MB/s), 11.7MiB/s-32.6MiB/s (12.3MB/s-34.2MB/s), io=88.5MiB (92.8MB), run=1006-1042msec 00:35:01.652 00:35:01.652 Disk stats (read/write): 00:35:01.652 nvme0n1: ios=5170/5378, merge=0/0, ticks=43630/35331, in_queue=78961, util=87.58% 00:35:01.652 nvme0n2: ios=2092/2537, merge=0/0, ticks=27032/53377, in_queue=80409, util=96.53% 00:35:01.652 nvme0n3: ios=6474/7168, merge=0/0, ticks=51634/49006, in_queue=100640, util=88.37% 00:35:01.652 nvme0n4: ios=3641/3623, merge=0/0, ticks=48387/53317, in_queue=101704, 
util=96.68% 00:35:01.652 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:01.652 [global] 00:35:01.652 thread=1 00:35:01.652 invalidate=1 00:35:01.652 rw=randwrite 00:35:01.652 time_based=1 00:35:01.652 runtime=1 00:35:01.652 ioengine=libaio 00:35:01.652 direct=1 00:35:01.652 bs=4096 00:35:01.652 iodepth=128 00:35:01.652 norandommap=0 00:35:01.652 numjobs=1 00:35:01.652 00:35:01.652 verify_dump=1 00:35:01.652 verify_backlog=512 00:35:01.652 verify_state_save=0 00:35:01.652 do_verify=1 00:35:01.652 verify=crc32c-intel 00:35:01.652 [job0] 00:35:01.652 filename=/dev/nvme0n1 00:35:01.652 [job1] 00:35:01.652 filename=/dev/nvme0n2 00:35:01.652 [job2] 00:35:01.652 filename=/dev/nvme0n3 00:35:01.652 [job3] 00:35:01.652 filename=/dev/nvme0n4 00:35:01.652 Could not set queue depth (nvme0n1) 00:35:01.652 Could not set queue depth (nvme0n2) 00:35:01.652 Could not set queue depth (nvme0n3) 00:35:01.652 Could not set queue depth (nvme0n4) 00:35:01.914 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:01.914 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:01.914 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:01.914 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:01.914 fio-3.35 00:35:01.914 Starting 4 threads 00:35:03.319 00:35:03.319 job0: (groupid=0, jobs=1): err= 0: pid=3719865: Fri Dec 6 11:34:09 2024 00:35:03.319 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:35:03.319 slat (nsec): min=920, max=16977k, avg=101499.70, stdev=883574.74 00:35:03.319 clat (usec): min=2851, max=70165, avg=13199.01, stdev=11028.15 00:35:03.319 lat (usec): 
min=2857, max=74653, avg=13300.51, stdev=11117.04 00:35:03.319 clat percentiles (usec): 00:35:03.319 | 1.00th=[ 4113], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6128], 00:35:03.319 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8225], 60.00th=[ 8586], 00:35:03.319 | 70.00th=[10159], 80.00th=[22676], 90.00th=[32113], 95.00th=[35914], 00:35:03.319 | 99.00th=[49021], 99.50th=[54264], 99.90th=[69731], 99.95th=[69731], 00:35:03.319 | 99.99th=[69731] 00:35:03.319 write: IOPS=4858, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1006msec); 0 zone resets 00:35:03.319 slat (nsec): min=1553, max=31362k, avg=103156.55, stdev=945417.15 00:35:03.319 clat (usec): min=696, max=64448, avg=13630.65, stdev=11073.79 00:35:03.319 lat (usec): min=1338, max=64456, avg=13733.80, stdev=11150.17 00:35:03.319 clat percentiles (usec): 00:35:03.319 | 1.00th=[ 2900], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 6587], 00:35:03.319 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[10028], 00:35:03.319 | 70.00th=[15664], 80.00th=[20841], 90.00th=[26084], 95.00th=[35914], 00:35:03.319 | 99.00th=[58459], 99.50th=[61604], 99.90th=[64226], 99.95th=[64226], 00:35:03.319 | 99.99th=[64226] 00:35:03.319 bw ( KiB/s): min=17008, max=21072, per=24.29%, avg=19040.00, stdev=2873.68, samples=2 00:35:03.319 iops : min= 4252, max= 5268, avg=4760.00, stdev=718.42, samples=2 00:35:03.319 lat (usec) : 750=0.01% 00:35:03.319 lat (msec) : 2=0.09%, 4=1.03%, 10=62.37%, 20=14.95%, 50=20.32% 00:35:03.319 lat (msec) : 100=1.21% 00:35:03.319 cpu : usr=3.38%, sys=4.98%, ctx=280, majf=0, minf=2 00:35:03.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:03.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:03.319 issued rwts: total=4608,4888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:03.319 job1: (groupid=0, 
jobs=1): err= 0: pid=3719866: Fri Dec 6 11:34:09 2024 00:35:03.319 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1006msec) 00:35:03.319 slat (nsec): min=983, max=14941k, avg=100302.52, stdev=788186.44 00:35:03.319 clat (usec): min=3640, max=52640, avg=13031.74, stdev=10775.12 00:35:03.319 lat (usec): min=3798, max=52644, avg=13132.05, stdev=10864.22 00:35:03.319 clat percentiles (usec): 00:35:03.319 | 1.00th=[ 4752], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6194], 00:35:03.319 | 30.00th=[ 6390], 40.00th=[ 6718], 50.00th=[ 7963], 60.00th=[ 8455], 00:35:03.319 | 70.00th=[10421], 80.00th=[22938], 90.00th=[31327], 95.00th=[38536], 00:35:03.319 | 99.00th=[44827], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:35:03.319 | 99.99th=[52691] 00:35:03.319 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:35:03.319 slat (nsec): min=1603, max=14567k, avg=107387.30, stdev=702360.68 00:35:03.319 clat (msec): min=2, max=106, avg=15.76, stdev=17.98 00:35:03.319 lat (msec): min=2, max=106, avg=15.87, stdev=18.09 00:35:03.319 clat percentiles (msec): 00:35:03.319 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:35:03.319 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:35:03.319 | 70.00th=[ 12], 80.00th=[ 17], 90.00th=[ 50], 95.00th=[ 63], 00:35:03.319 | 99.00th=[ 81], 99.50th=[ 92], 99.90th=[ 100], 99.95th=[ 100], 00:35:03.319 | 99.99th=[ 107] 00:35:03.319 bw ( KiB/s): min=12288, max=24576, per=23.51%, avg=18432.00, stdev=8688.93, samples=2 00:35:03.319 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:35:03.319 lat (msec) : 4=3.17%, 10=57.80%, 20=18.79%, 50=15.03%, 100=5.21% 00:35:03.319 lat (msec) : 250=0.01% 00:35:03.319 cpu : usr=3.68%, sys=5.37%, ctx=271, majf=0, minf=1 00:35:03.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:03.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:35:03.319 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:03.319 job2: (groupid=0, jobs=1): err= 0: pid=3719867: Fri Dec 6 11:34:09 2024 00:35:03.319 read: IOPS=5389, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1007msec) 00:35:03.319 slat (nsec): min=1011, max=14999k, avg=82799.70, stdev=601638.62 00:35:03.319 clat (usec): min=3808, max=58519, avg=10064.93, stdev=5982.60 00:35:03.319 lat (usec): min=3813, max=58526, avg=10147.73, stdev=6042.39 00:35:03.319 clat percentiles (usec): 00:35:03.319 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7046], 00:35:03.319 | 30.00th=[ 7504], 40.00th=[ 7963], 50.00th=[ 8848], 60.00th=[ 9634], 00:35:03.319 | 70.00th=[10028], 80.00th=[10552], 90.00th=[13304], 95.00th=[17433], 00:35:03.319 | 99.00th=[41681], 99.50th=[49021], 99.90th=[55313], 99.95th=[58459], 00:35:03.319 | 99.99th=[58459] 00:35:03.319 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:35:03.319 slat (nsec): min=1590, max=10420k, avg=92009.57, stdev=560576.56 00:35:03.319 clat (usec): min=1139, max=58523, avg=12970.15, stdev=12519.28 00:35:03.319 lat (usec): min=1150, max=58536, avg=13062.16, stdev=12600.61 00:35:03.319 clat percentiles (usec): 00:35:03.319 | 1.00th=[ 3818], 5.00th=[ 4228], 10.00th=[ 5538], 20.00th=[ 6390], 00:35:03.319 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 8094], 60.00th=[ 9110], 00:35:03.319 | 70.00th=[10814], 80.00th=[14222], 90.00th=[31065], 95.00th=[49021], 00:35:03.319 | 99.00th=[56886], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:35:03.319 | 99.99th=[58459] 00:35:03.319 bw ( KiB/s): min=16384, max=28672, per=28.74%, avg=22528.00, stdev=8688.93, samples=2 00:35:03.319 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:35:03.319 lat (msec) : 2=0.08%, 4=0.84%, 10=66.73%, 20=22.57%, 50=7.27% 00:35:03.319 lat (msec) : 100=2.50% 00:35:03.320 cpu : usr=3.88%, 
sys=6.66%, ctx=345, majf=0, minf=1 00:35:03.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:03.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:03.320 issued rwts: total=5427,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:03.320 job3: (groupid=0, jobs=1): err= 0: pid=3719868: Fri Dec 6 11:34:09 2024 00:35:03.320 read: IOPS=4090, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1006msec) 00:35:03.320 slat (nsec): min=997, max=14427k, avg=109414.25, stdev=938126.30 00:35:03.320 clat (usec): min=3409, max=47252, avg=14685.48, stdev=5883.71 00:35:03.320 lat (usec): min=4544, max=49619, avg=14794.89, stdev=5963.65 00:35:03.320 clat percentiles (usec): 00:35:03.320 | 1.00th=[ 7832], 5.00th=[10159], 10.00th=[10421], 20.00th=[11338], 00:35:03.320 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13173], 60.00th=[13698], 00:35:03.320 | 70.00th=[14222], 80.00th=[16319], 90.00th=[21103], 95.00th=[25560], 00:35:03.320 | 99.00th=[38536], 99.50th=[42730], 99.90th=[47449], 99.95th=[47449], 00:35:03.320 | 99.99th=[47449] 00:35:03.320 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:35:03.320 slat (nsec): min=1617, max=11955k, avg=109661.49, stdev=688692.32 00:35:03.320 clat (usec): min=1134, max=65365, avg=14588.22, stdev=10924.77 00:35:03.320 lat (usec): min=1176, max=65372, avg=14697.88, stdev=11002.77 00:35:03.320 clat percentiles (usec): 00:35:03.320 | 1.00th=[ 3097], 5.00th=[ 6325], 10.00th=[ 7635], 20.00th=[ 8979], 00:35:03.320 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11076], 60.00th=[11863], 00:35:03.320 | 70.00th=[14091], 80.00th=[16581], 90.00th=[23462], 95.00th=[40109], 00:35:03.320 | 99.00th=[62653], 99.50th=[63177], 99.90th=[65274], 99.95th=[65274], 00:35:03.320 | 99.99th=[65274] 00:35:03.320 bw ( KiB/s): min=15528, max=20464, 
per=22.96%, avg=17996.00, stdev=3490.28, samples=2 00:35:03.320 iops : min= 3882, max= 5116, avg=4499.00, stdev=872.57, samples=2 00:35:03.320 lat (msec) : 2=0.01%, 4=0.96%, 10=20.52%, 20=66.30%, 50=10.58% 00:35:03.320 lat (msec) : 100=1.63% 00:35:03.320 cpu : usr=2.99%, sys=4.48%, ctx=316, majf=0, minf=1 00:35:03.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:03.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:03.320 issued rwts: total=4115,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:03.320 00:35:03.320 Run status group 0 (all jobs): 00:35:03.320 READ: bw=71.3MiB/s (74.7MB/s), 16.0MiB/s-21.1MiB/s (16.8MB/s-22.1MB/s), io=71.8MiB (75.3MB), run=1006-1007msec 00:35:03.320 WRITE: bw=76.6MiB/s (80.3MB/s), 17.9MiB/s-21.8MiB/s (18.8MB/s-22.9MB/s), io=77.1MiB (80.8MB), run=1006-1007msec 00:35:03.320 00:35:03.320 Disk stats (read/write): 00:35:03.320 nvme0n1: ios=4252/4608, merge=0/0, ticks=25126/30188, in_queue=55314, util=91.88% 00:35:03.320 nvme0n2: ios=2598/3072, merge=0/0, ticks=22449/52297, in_queue=74746, util=92.25% 00:35:03.320 nvme0n3: ios=5165/5294, merge=0/0, ticks=44836/53143, in_queue=97979, util=92.40% 00:35:03.320 nvme0n4: ios=3297/3584, merge=0/0, ticks=42655/45256, in_queue=87911, util=89.52% 00:35:03.320 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:03.320 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3720196 00:35:03.320 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:03.320 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 
3 00:35:03.320 [global] 00:35:03.320 thread=1 00:35:03.320 invalidate=1 00:35:03.320 rw=read 00:35:03.320 time_based=1 00:35:03.320 runtime=10 00:35:03.320 ioengine=libaio 00:35:03.320 direct=1 00:35:03.320 bs=4096 00:35:03.320 iodepth=1 00:35:03.320 norandommap=1 00:35:03.320 numjobs=1 00:35:03.320 00:35:03.320 [job0] 00:35:03.320 filename=/dev/nvme0n1 00:35:03.320 [job1] 00:35:03.320 filename=/dev/nvme0n2 00:35:03.320 [job2] 00:35:03.320 filename=/dev/nvme0n3 00:35:03.320 [job3] 00:35:03.320 filename=/dev/nvme0n4 00:35:03.320 Could not set queue depth (nvme0n1) 00:35:03.320 Could not set queue depth (nvme0n2) 00:35:03.320 Could not set queue depth (nvme0n3) 00:35:03.320 Could not set queue depth (nvme0n4) 00:35:03.594 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:03.594 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:03.594 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:03.594 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:03.594 fio-3.35 00:35:03.594 Starting 4 threads 00:35:06.151 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:06.151 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12763136, buflen=4096 00:35:06.151 fio: pid=3720389, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:06.411 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:06.411 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:35:06.411 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:06.411 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10952704, buflen=4096 00:35:06.411 fio: pid=3720388, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:06.672 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12222464, buflen=4096 00:35:06.672 fio: pid=3720386, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:06.672 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:06.672 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:06.932 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3264512, buflen=4096 00:35:06.932 fio: pid=3720387, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:06.932 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:06.932 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:06.932 00:35:06.932 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3720386: Fri Dec 6 11:34:12 2024 00:35:06.932 read: IOPS=1008, BW=4031KiB/s (4128kB/s)(11.7MiB/2961msec) 00:35:06.932 slat (usec): min=6, max=29551, avg=46.63, stdev=671.23 00:35:06.932 clat (usec): min=445, max=3502, avg=932.02, stdev=156.71 00:35:06.932 lat 
(usec): min=470, max=30442, avg=978.65, stdev=689.69 00:35:06.932 clat percentiles (usec): 00:35:06.932 | 1.00th=[ 523], 5.00th=[ 660], 10.00th=[ 725], 20.00th=[ 799], 00:35:06.932 | 30.00th=[ 873], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 988], 00:35:06.932 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:35:06.932 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1795], 99.95th=[ 2008], 00:35:06.932 | 99.99th=[ 3490] 00:35:06.932 bw ( KiB/s): min= 3968, max= 4360, per=34.40%, avg=4180.80, stdev=160.66, samples=5 00:35:06.932 iops : min= 992, max= 1090, avg=1045.20, stdev=40.16, samples=5 00:35:06.932 lat (usec) : 500=0.50%, 750=12.19%, 1000=51.83% 00:35:06.932 lat (msec) : 2=35.38%, 4=0.07% 00:35:06.932 cpu : usr=1.28%, sys=2.80%, ctx=2990, majf=0, minf=1 00:35:06.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.932 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.932 issued rwts: total=2985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:06.932 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3720387: Fri Dec 6 11:34:12 2024 00:35:06.932 read: IOPS=253, BW=1012KiB/s (1036kB/s)(3188KiB/3151msec) 00:35:06.932 slat (usec): min=6, max=252, avg=25.82, stdev= 9.27 00:35:06.932 clat (usec): min=313, max=42082, avg=3892.78, stdev=10412.45 00:35:06.932 lat (usec): min=338, max=42108, avg=3918.60, stdev=10413.97 00:35:06.932 clat percentiles (usec): 00:35:06.932 | 1.00th=[ 529], 5.00th=[ 685], 10.00th=[ 766], 20.00th=[ 865], 00:35:06.932 | 30.00th=[ 955], 40.00th=[ 1004], 50.00th=[ 1037], 60.00th=[ 1090], 00:35:06.932 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1303], 95.00th=[41157], 00:35:06.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:35:06.932 | 99.99th=[42206] 00:35:06.932 bw ( KiB/s): min= 96, max= 3816, per=8.71%, avg=1058.67, stdev=1447.96, samples=6 00:35:06.932 iops : min= 24, max= 954, avg=264.67, stdev=361.99, samples=6 00:35:06.932 lat (usec) : 500=0.75%, 750=6.89%, 1000=31.95% 00:35:06.932 lat (msec) : 2=53.01%, 4=0.13%, 50=7.14% 00:35:06.932 cpu : usr=0.16%, sys=0.86%, ctx=800, majf=0, minf=2 00:35:06.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.932 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.932 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:06.932 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3720388: Fri Dec 6 11:34:12 2024 00:35:06.932 read: IOPS=957, BW=3828KiB/s (3920kB/s)(10.4MiB/2794msec) 00:35:06.932 slat (usec): min=6, max=19522, avg=37.72, stdev=431.75 00:35:06.932 clat (usec): min=271, max=2386, avg=990.50, stdev=156.90 00:35:06.932 lat (usec): min=297, max=20604, avg=1028.22, stdev=462.42 00:35:06.932 clat percentiles (usec): 00:35:06.932 | 1.00th=[ 502], 5.00th=[ 676], 10.00th=[ 783], 20.00th=[ 898], 00:35:06.932 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1037], 00:35:06.932 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:35:06.932 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1467], 99.95th=[ 1500], 00:35:06.932 | 99.99th=[ 2376] 00:35:06.932 bw ( KiB/s): min= 3744, max= 4192, per=32.31%, avg=3926.40, stdev=175.13, samples=5 00:35:06.932 iops : min= 936, max= 1048, avg=981.60, stdev=43.78, samples=5 00:35:06.932 lat (usec) : 500=0.97%, 750=7.10%, 1000=38.09% 00:35:06.932 lat (msec) : 2=53.76%, 4=0.04% 00:35:06.932 cpu : usr=1.07%, sys=2.90%, ctx=2678, majf=0, minf=2 00:35:06.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.932 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.932 issued rwts: total=2675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:06.932 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3720389: Fri Dec 6 11:34:12 2024 00:35:06.932 read: IOPS=1204, BW=4816KiB/s (4932kB/s)(12.2MiB/2588msec) 00:35:06.932 slat (nsec): min=6690, max=59780, avg=24125.15, stdev=5863.85 00:35:06.932 clat (usec): min=345, max=1824, avg=797.48, stdev=134.04 00:35:06.932 lat (usec): min=371, max=1852, avg=821.60, stdev=134.99 00:35:06.932 clat percentiles (usec): 00:35:06.932 | 1.00th=[ 519], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 693], 00:35:06.932 | 30.00th=[ 725], 40.00th=[ 758], 50.00th=[ 799], 60.00th=[ 840], 00:35:06.932 | 70.00th=[ 865], 80.00th=[ 898], 90.00th=[ 963], 95.00th=[ 1020], 00:35:06.932 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[ 1287], 99.95th=[ 1729], 00:35:06.932 | 99.99th=[ 1827] 00:35:06.933 bw ( KiB/s): min= 4256, max= 5048, per=39.77%, avg=4832.00, stdev=327.80, samples=5 00:35:06.933 iops : min= 1064, max= 1262, avg=1208.00, stdev=81.95, samples=5 00:35:06.933 lat (usec) : 500=0.71%, 750=37.25%, 1000=55.73% 00:35:06.933 lat (msec) : 2=6.29% 00:35:06.933 cpu : usr=1.28%, sys=3.36%, ctx=3118, majf=0, minf=2 00:35:06.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.933 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.933 issued rwts: total=3117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:06.933 00:35:06.933 Run status group 0 (all jobs): 00:35:06.933 READ: 
bw=11.9MiB/s (12.4MB/s), 1012KiB/s-4816KiB/s (1036kB/s-4932kB/s), io=37.4MiB (39.2MB), run=2588-3151msec 00:35:06.933 00:35:06.933 Disk stats (read/write): 00:35:06.933 nvme0n1: ios=2882/0, merge=0/0, ticks=2611/0, in_queue=2611, util=92.59% 00:35:06.933 nvme0n2: ios=796/0, merge=0/0, ticks=3044/0, in_queue=3044, util=95.66% 00:35:06.933 nvme0n3: ios=2529/0, merge=0/0, ticks=2407/0, in_queue=2407, util=96.03% 00:35:06.933 nvme0n4: ios=3116/0, merge=0/0, ticks=2424/0, in_queue=2424, util=96.46% 00:35:06.933 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:06.933 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:07.192 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:07.192 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:07.452 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:07.452 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:07.452 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:07.452 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 
00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3720196 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:07.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:07.713 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:07.974 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:07.974 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:07.974 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:07.974 nvmf hotplug test: fio failed as expected 00:35:07.974 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:07.974 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:07.974 rmmod nvme_tcp 00:35:07.974 rmmod nvme_fabrics 00:35:07.974 rmmod nvme_keyring 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:08.234 11:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3716947 ']' 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3716947 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3716947 ']' 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3716947 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716947 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3716947' 00:35:08.234 killing process with pid 3716947 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3716947 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3716947 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.234 11:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.784 00:35:10.784 real 0m28.651s 00:35:10.784 user 2m7.999s 00:35:10.784 sys 0m13.012s 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:10.784 ************************************ 00:35:10.784 END TEST nvmf_fio_target 00:35:10.784 ************************************ 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:10.784 ************************************ 00:35:10.784 START TEST nvmf_bdevio 00:35:10.784 ************************************ 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:10.784 * Looking for test storage... 00:35:10.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.784 11:34:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.784 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.785 --rc genhtml_branch_coverage=1 
00:35:10.785 --rc genhtml_function_coverage=1 00:35:10.785 --rc genhtml_legend=1 00:35:10.785 --rc geninfo_all_blocks=1 00:35:10.785 --rc geninfo_unexecuted_blocks=1 00:35:10.785 00:35:10.785 ' 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.785 --rc genhtml_branch_coverage=1 00:35:10.785 --rc genhtml_function_coverage=1 00:35:10.785 --rc genhtml_legend=1 00:35:10.785 --rc geninfo_all_blocks=1 00:35:10.785 --rc geninfo_unexecuted_blocks=1 00:35:10.785 00:35:10.785 ' 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.785 --rc genhtml_branch_coverage=1 00:35:10.785 --rc genhtml_function_coverage=1 00:35:10.785 --rc genhtml_legend=1 00:35:10.785 --rc geninfo_all_blocks=1 00:35:10.785 --rc geninfo_unexecuted_blocks=1 00:35:10.785 00:35:10.785 ' 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.785 --rc genhtml_branch_coverage=1 00:35:10.785 --rc genhtml_function_coverage=1 00:35:10.785 --rc genhtml_legend=1 00:35:10.785 --rc geninfo_all_blocks=1 00:35:10.785 --rc geninfo_unexecuted_blocks=1 00:35:10.785 00:35:10.785 ' 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.785 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.786 11:34:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:10.786 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.113 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:35:19.113 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.114 11:34:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.114 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:19.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:19.115 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.115 11:34:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.115 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:19.116 Found net devices under 0000:31:00.0: cvl_0_0 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.116 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.117 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.117 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.117 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.117 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:19.117 Found net devices under 0000:31:00.1: cvl_0_1 00:35:19.117 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.117 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.120 11:34:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.120 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:19.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:35:19.121 00:35:19.121 --- 10.0.0.2 ping statistics --- 00:35:19.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.121 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:19.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:35:19.121 00:35:19.121 --- 10.0.0.1 ping statistics --- 00:35:19.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.121 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3725878 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3725878 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3725878 ']' 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.121 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.121 [2024-12-06 11:34:24.990077] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:19.121 [2024-12-06 11:34:24.991062] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:35:19.121 [2024-12-06 11:34:24.991099] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.121 [2024-12-06 11:34:25.091677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:19.121 [2024-12-06 11:34:25.127389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.121 [2024-12-06 11:34:25.127423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.121 [2024-12-06 11:34:25.127432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.121 [2024-12-06 11:34:25.127439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.121 [2024-12-06 11:34:25.127444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.121 [2024-12-06 11:34:25.128955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:19.121 [2024-12-06 11:34:25.129258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:19.121 [2024-12-06 11:34:25.129372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:19.121 [2024-12-06 11:34:25.129372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.121 [2024-12-06 11:34:25.185554] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:19.121 [2024-12-06 11:34:25.186880] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:19.121 [2024-12-06 11:34:25.186960] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:19.121 [2024-12-06 11:34:25.187803] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:19.121 [2024-12-06 11:34:25.187848] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.698 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.698 [2024-12-06 11:34:25.850245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.959 Malloc0 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:19.959 [2024-12-06 11:34:25.938476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:19.959 { 00:35:19.959 "params": { 00:35:19.959 "name": "Nvme$subsystem", 00:35:19.959 "trtype": "$TEST_TRANSPORT", 00:35:19.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.959 "adrfam": "ipv4", 00:35:19.959 "trsvcid": "$NVMF_PORT", 00:35:19.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.959 "hdgst": ${hdgst:-false}, 00:35:19.959 "ddgst": ${ddgst:-false} 00:35:19.959 }, 00:35:19.959 "method": "bdev_nvme_attach_controller" 00:35:19.959 } 00:35:19.959 EOF 00:35:19.959 )") 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:19.959 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:19.959 "params": { 00:35:19.959 "name": "Nvme1", 00:35:19.959 "trtype": "tcp", 00:35:19.959 "traddr": "10.0.0.2", 00:35:19.959 "adrfam": "ipv4", 00:35:19.959 "trsvcid": "4420", 00:35:19.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.959 "hdgst": false, 00:35:19.959 "ddgst": false 00:35:19.959 }, 00:35:19.959 "method": "bdev_nvme_attach_controller" 00:35:19.959 }' 00:35:19.959 [2024-12-06 11:34:25.995095] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:35:19.960 [2024-12-06 11:34:25.995167] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726127 ] 00:35:19.960 [2024-12-06 11:34:26.081079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:20.221 [2024-12-06 11:34:26.125237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.221 [2024-12-06 11:34:26.125360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:20.221 [2024-12-06 11:34:26.125364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.482 I/O targets: 00:35:20.482 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:20.482 00:35:20.482 00:35:20.482 CUnit - A unit testing framework for C - Version 2.1-3 00:35:20.482 http://cunit.sourceforge.net/ 00:35:20.482 00:35:20.482 00:35:20.482 Suite: bdevio tests on: Nvme1n1 00:35:20.482 Test: blockdev write read block ...passed 00:35:20.482 Test: blockdev write zeroes read block ...passed 00:35:20.482 Test: blockdev write zeroes read no split ...passed 00:35:20.482 Test: blockdev 
write zeroes read split ...passed 00:35:20.482 Test: blockdev write zeroes read split partial ...passed 00:35:20.482 Test: blockdev reset ...[2024-12-06 11:34:26.632272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:20.482 [2024-12-06 11:34:26.632336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24870e0 (9): Bad file descriptor 00:35:20.482 [2024-12-06 11:34:26.638150] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:20.482 passed 00:35:20.744 Test: blockdev write read 8 blocks ...passed 00:35:20.744 Test: blockdev write read size > 128k ...passed 00:35:20.744 Test: blockdev write read invalid size ...passed 00:35:20.744 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:20.744 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:20.744 Test: blockdev write read max offset ...passed 00:35:20.744 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:20.744 Test: blockdev writev readv 8 blocks ...passed 00:35:20.744 Test: blockdev writev readv 30 x 1block ...passed 00:35:20.744 Test: blockdev writev readv block ...passed 00:35:20.744 Test: blockdev writev readv size > 128k ...passed 00:35:20.744 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:20.744 Test: blockdev comparev and writev ...[2024-12-06 11:34:26.855591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.744 [2024-12-06 11:34:26.855617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.744 [2024-12-06 11:34:26.855628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.744 
[2024-12-06 11:34:26.855634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.744 [2024-12-06 11:34:26.855917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.744 [2024-12-06 11:34:26.855926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:20.744 [2024-12-06 11:34:26.855935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.744 [2024-12-06 11:34:26.855941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:20.744 [2024-12-06 11:34:26.856219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.744 [2024-12-06 11:34:26.856226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:20.744 [2024-12-06 11:34:26.856236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.744 [2024-12-06 11:34:26.856241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:20.744 [2024-12-06 11:34:26.856510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.744 [2024-12-06 11:34:26.856518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:20.745 [2024-12-06 11:34:26.856531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:20.745 [2024-12-06 11:34:26.856537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:20.745 passed 00:35:21.006 Test: blockdev nvme passthru rw ...passed 00:35:21.006 Test: blockdev nvme passthru vendor specific ...[2024-12-06 11:34:26.940222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.006 [2024-12-06 11:34:26.940233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:21.006 [2024-12-06 11:34:26.940348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.007 [2024-12-06 11:34:26.940356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:21.007 [2024-12-06 11:34:26.940474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.007 [2024-12-06 11:34:26.940481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:21.007 [2024-12-06 11:34:26.940604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:21.007 [2024-12-06 11:34:26.940611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:21.007 passed 00:35:21.007 Test: blockdev nvme admin passthru ...passed 00:35:21.007 Test: blockdev copy ...passed 00:35:21.007 00:35:21.007 Run Summary: Type Total Ran Passed Failed Inactive 00:35:21.007 suites 1 1 n/a 0 0 00:35:21.007 tests 23 23 23 0 0 00:35:21.007 asserts 152 152 152 0 n/a 00:35:21.007 00:35:21.007 Elapsed time = 1.136 
seconds 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.007 rmmod nvme_tcp 00:35:21.007 rmmod nvme_fabrics 00:35:21.007 rmmod nvme_keyring 00:35:21.007 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:21.268 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:21.268 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:21.268 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3725878 ']' 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3725878 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3725878 ']' 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3725878 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3725878 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3725878' 00:35:21.269 killing process with pid 3725878 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3725878 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3725878 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.269 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.818 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:23.818 00:35:23.818 real 0m12.958s 00:35:23.818 user 0m10.076s 00:35:23.818 sys 0m6.991s 00:35:23.818 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.818 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.818 ************************************ 00:35:23.818 END TEST nvmf_bdevio 00:35:23.818 ************************************ 00:35:23.818 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:23.818 00:35:23.818 real 5m10.496s 00:35:23.818 user 10m8.095s 00:35:23.818 sys 2m13.355s 00:35:23.819 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.819 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:23.819 ************************************ 00:35:23.819 END TEST nvmf_target_core_interrupt_mode 00:35:23.819 ************************************ 00:35:23.819 11:34:29 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:23.819 11:34:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:23.819 11:34:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.819 11:34:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:23.819 ************************************ 00:35:23.819 START TEST nvmf_interrupt 00:35:23.819 ************************************ 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:23.819 * Looking for test storage... 
00:35:23.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.819 --rc genhtml_branch_coverage=1 00:35:23.819 --rc genhtml_function_coverage=1 00:35:23.819 --rc genhtml_legend=1 00:35:23.819 --rc geninfo_all_blocks=1 00:35:23.819 --rc geninfo_unexecuted_blocks=1 00:35:23.819 00:35:23.819 ' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.819 --rc genhtml_branch_coverage=1 00:35:23.819 --rc 
genhtml_function_coverage=1 00:35:23.819 --rc genhtml_legend=1 00:35:23.819 --rc geninfo_all_blocks=1 00:35:23.819 --rc geninfo_unexecuted_blocks=1 00:35:23.819 00:35:23.819 ' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.819 --rc genhtml_branch_coverage=1 00:35:23.819 --rc genhtml_function_coverage=1 00:35:23.819 --rc genhtml_legend=1 00:35:23.819 --rc geninfo_all_blocks=1 00:35:23.819 --rc geninfo_unexecuted_blocks=1 00:35:23.819 00:35:23.819 ' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:23.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.819 --rc genhtml_branch_coverage=1 00:35:23.819 --rc genhtml_function_coverage=1 00:35:23.819 --rc genhtml_legend=1 00:35:23.819 --rc geninfo_all_blocks=1 00:35:23.819 --rc geninfo_unexecuted_blocks=1 00:35:23.819 00:35:23.819 ' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.819 
11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.819 
11:34:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:23.819 11:34:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:23.819 11:34:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:23.820 
11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:23.820 11:34:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.013 11:34:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:32.013 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:32.013 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:32.013 11:34:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:32.013 Found net devices under 0000:31:00.0: cvl_0_0 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.013 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:32.013 Found net devices under 0000:31:00.1: cvl_0_1 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.014 11:34:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:32.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:35:32.014 00:35:32.014 --- 10.0.0.2 ping statistics --- 00:35:32.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.014 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:32.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:35:32.014 00:35:32.014 --- 10.0.0.1 ping statistics --- 00:35:32.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.014 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:32.014 11:34:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3730911 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3730911 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3730911 ']' 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.014 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:32.014 [2024-12-06 11:34:37.559812] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:32.014 [2024-12-06 11:34:37.560809] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:35:32.014 [2024-12-06 11:34:37.560850] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.014 [2024-12-06 11:34:37.646165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:32.014 [2024-12-06 11:34:37.682114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.014 [2024-12-06 11:34:37.682146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.014 [2024-12-06 11:34:37.682153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.014 [2024-12-06 11:34:37.682161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.014 [2024-12-06 11:34:37.682166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.014 [2024-12-06 11:34:37.683335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.014 [2024-12-06 11:34:37.683337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.014 [2024-12-06 11:34:37.739318] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:32.014 [2024-12-06 11:34:37.740023] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:32.014 [2024-12-06 11:34:37.740274] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:32.275 5000+0 records in 00:35:32.275 5000+0 records out 00:35:32.275 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0189326 s, 541 MB/s 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.275 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.536 AIO0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.536 11:34:38 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.536 [2024-12-06 11:34:38.451942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:32.536 [2024-12-06 11:34:38.492281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3730911 0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3730911 0 idle 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730911 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730911 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:32.536 
11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3730911 1 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3730911 1 idle 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.536 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.537 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.537 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:32.537 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730959 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:32.799 
11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730959 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3731205 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3730911 0 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3730911 0 busy 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:32.799 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730911 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0' 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730911 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:33.061 11:34:39 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3730911 1 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3730911 1 busy 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:33.061 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730959 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.30 reactor_1' 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730959 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.30 reactor_1 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:33.323 11:34:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3731205 00:35:43.327 Initializing NVMe Controllers 00:35:43.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.327 Controller IO queue size 256, less than required. 00:35:43.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:43.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:43.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:43.327 Initialization complete. Launching workers. 
00:35:43.327 ======================================================== 00:35:43.327 Latency(us) 00:35:43.327 Device Information : IOPS MiB/s Average min max 00:35:43.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16695.60 65.22 15344.14 2262.47 56470.75 00:35:43.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20373.90 79.59 12568.76 4780.03 31605.83 00:35:43.327 ======================================================== 00:35:43.327 Total : 37069.50 144.80 13818.75 2262.47 56470.75 00:35:43.327 00:35:43.327 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3730911 0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3730911 0 idle 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730911 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.22 reactor_0' 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730911 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.22 reactor_0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3730911 1 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3730911 1 idle 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:43.328 11:34:49 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730959 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.98 reactor_1' 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730959 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.98 reactor_1 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:43.328 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:43.904 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:35:43.904 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:43.904 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:43.904 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:43.904 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3730911 0 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3730911 0 idle 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:45.816 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730911 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.46 reactor_0' 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730911 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.46 reactor_0 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3730911 1 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3730911 1 idle 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3730911 00:35:46.077 
11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3730911 -w 256 00:35:46.077 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3730959 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.10 reactor_1' 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3730959 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.10 reactor_1 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.338 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:46.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.600 rmmod nvme_tcp 00:35:46.600 rmmod nvme_fabrics 00:35:46.600 rmmod nvme_keyring 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:46.600 11:34:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:46.600 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3730911 ']' 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3730911 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3730911 ']' 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3730911 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3730911 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3730911' 00:35:46.601 killing process with pid 3730911 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3730911 00:35:46.601 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3730911 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:46.862 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.411 11:34:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:49.411 00:35:49.411 real 0m25.381s 00:35:49.411 user 0m40.082s 00:35:49.411 sys 0m9.715s 00:35:49.411 11:34:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.411 11:34:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:49.411 ************************************ 00:35:49.411 END TEST nvmf_interrupt 00:35:49.411 ************************************ 00:35:49.411 00:35:49.411 real 31m8.348s 00:35:49.411 user 61m36.928s 00:35:49.411 sys 10m57.082s 00:35:49.411 11:34:55 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.411 11:34:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.411 ************************************ 00:35:49.411 END TEST nvmf_tcp 00:35:49.411 ************************************ 00:35:49.411 11:34:55 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:49.411 11:34:55 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:49.411 11:34:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:49.411 11:34:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.411 11:34:55 -- common/autotest_common.sh@10 -- # set +x 00:35:49.411 ************************************ 
00:35:49.411 START TEST spdkcli_nvmf_tcp 00:35:49.411 ************************************ 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:49.411 * Looking for test storage... 00:35:49.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:49.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.411 --rc genhtml_branch_coverage=1 00:35:49.411 --rc genhtml_function_coverage=1 00:35:49.411 --rc genhtml_legend=1 00:35:49.411 --rc geninfo_all_blocks=1 00:35:49.411 --rc geninfo_unexecuted_blocks=1 00:35:49.411 00:35:49.411 ' 00:35:49.411 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:49.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.411 --rc genhtml_branch_coverage=1 00:35:49.411 --rc genhtml_function_coverage=1 00:35:49.411 --rc genhtml_legend=1 00:35:49.411 --rc geninfo_all_blocks=1 
00:35:49.411 --rc geninfo_unexecuted_blocks=1 00:35:49.411 00:35:49.411 ' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:49.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.412 --rc genhtml_branch_coverage=1 00:35:49.412 --rc genhtml_function_coverage=1 00:35:49.412 --rc genhtml_legend=1 00:35:49.412 --rc geninfo_all_blocks=1 00:35:49.412 --rc geninfo_unexecuted_blocks=1 00:35:49.412 00:35:49.412 ' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:49.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.412 --rc genhtml_branch_coverage=1 00:35:49.412 --rc genhtml_function_coverage=1 00:35:49.412 --rc genhtml_legend=1 00:35:49.412 --rc geninfo_all_blocks=1 00:35:49.412 --rc geninfo_unexecuted_blocks=1 00:35:49.412 00:35:49.412 ' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:49.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3734403 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3734403 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3734403 ']' 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:49.412 
11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:49.412 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.412 [2024-12-06 11:34:55.384417] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:35:49.412 [2024-12-06 11:34:55.384483] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734403 ] 00:35:49.412 [2024-12-06 11:34:55.469888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:49.412 [2024-12-06 11:34:55.513457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.412 [2024-12-06 11:34:55.513460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.353 11:34:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:50.353 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:50.353 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:50.354 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:50.354 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:50.354 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:50.354 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:50.354 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:50.354 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:50.354 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:50.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:50.354 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4 secure_channel=True allow_any_host=True'\'' 00:35:50.354 '\''/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.354 ' 00:35:52.892 [2024-12-06 11:34:58.643906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:53.829 [2024-12-06 11:34:59.851882] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:56.371 [2024-12-06 11:35:02.070541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:58.284 [2024-12-06 11:35:03.976141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:59.666 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:59.666 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:59.666 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:59.666 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:59.666 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:59.666 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:59.666 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:59.666 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:59.666 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:59.666 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:59.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:59.666 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 
IPv4 secure_channel=True allow_any_host=True', False] 00:35:59.666 Executing command: ['/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@67 -- # timing_exit spdkcli_create_nvmf_config 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # timing_enter spdkcli_check_match 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # check_match 00:35:59.927 11:35:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@71 -- # timing_exit spdkcli_check_match 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@73 -- # timing_enter spdkcli_clear_nvmf_config 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.188 11:35:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.449 11:35:06 spdkcli_nvmf_tcp -- 
spdkcli/nvmf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:00.449 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:00.449 '\''/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:00.449 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:00.449 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:00.449 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:00.449 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:00.449 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:00.449 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:00.449 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:00.449 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:00.449 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:00.449 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:00.449 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:00.449 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:00.449 ' 00:36:05.739 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:05.739 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:05.739 Executing command: ['/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:05.739 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:05.739 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:05.739 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:05.739 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:05.739 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:05.739 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:05.739 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:05.739 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:05.739 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:05.739 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:05.739 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:05.739 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # timing_exit spdkcli_clear_nvmf_config 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@92 -- # killprocess 3734403 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3734403 ']' 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3734403 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3734403 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3734403' 00:36:05.739 killing process with pid 3734403 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3734403 00:36:05.739 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3734403 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3734403 ']' 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3734403 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3734403 ']' 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3734403 00:36:06.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3734403) - No such process 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3734403 is not found' 00:36:06.000 Process with pid 3734403 is not found 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:06.000 00:36:06.000 real 0m16.863s 00:36:06.000 user 
0m34.915s 00:36:06.000 sys 0m0.714s 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.000 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:06.000 ************************************ 00:36:06.000 END TEST spdkcli_nvmf_tcp 00:36:06.000 ************************************ 00:36:06.000 11:35:11 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:06.000 11:35:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:06.000 11:35:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.000 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:36:06.000 ************************************ 00:36:06.000 START TEST nvmf_identify_passthru 00:36:06.000 ************************************ 00:36:06.000 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:06.000 * Looking for test storage... 
00:36:06.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:06.000 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:06.000 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:36:06.000 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:06.262 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.262 11:35:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:06.262 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.262 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.262 --rc genhtml_branch_coverage=1 00:36:06.262 --rc genhtml_function_coverage=1 00:36:06.262 --rc genhtml_legend=1 00:36:06.262 --rc geninfo_all_blocks=1 00:36:06.262 --rc geninfo_unexecuted_blocks=1 00:36:06.262 00:36:06.262 ' 00:36:06.262 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.262 --rc genhtml_branch_coverage=1 00:36:06.262 --rc genhtml_function_coverage=1 
00:36:06.262 --rc genhtml_legend=1 00:36:06.262 --rc geninfo_all_blocks=1 00:36:06.262 --rc geninfo_unexecuted_blocks=1 00:36:06.262 00:36:06.262 ' 00:36:06.262 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.262 --rc genhtml_branch_coverage=1 00:36:06.262 --rc genhtml_function_coverage=1 00:36:06.262 --rc genhtml_legend=1 00:36:06.262 --rc geninfo_all_blocks=1 00:36:06.262 --rc geninfo_unexecuted_blocks=1 00:36:06.262 00:36:06.262 ' 00:36:06.262 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.262 --rc genhtml_branch_coverage=1 00:36:06.262 --rc genhtml_function_coverage=1 00:36:06.262 --rc genhtml_legend=1 00:36:06.262 --rc geninfo_all_blocks=1 00:36:06.262 --rc geninfo_unexecuted_blocks=1 00:36:06.262 00:36:06.262 ' 00:36:06.262 11:35:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.262 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.263 11:35:12 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:06.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.263 11:35:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.263 11:35:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:06.263 11:35:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.263 11:35:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.263 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:06.263 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:06.263 11:35:12 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:06.263 11:35:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:14.409 11:35:20 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:14.409 
11:35:20 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:14.409 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:14.409 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:14.409 Found net devices under 0000:31:00.0: cvl_0_0 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.409 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:14.410 Found net devices under 0000:31:00.1: cvl_0_1 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.410 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:36:14.671 00:36:14.671 --- 10.0.0.2 ping statistics --- 00:36:14.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.671 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:14.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:36:14.671 00:36:14.671 --- 10.0.0.1 ping statistics --- 00:36:14.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.671 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:14.671 11:35:20 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:14.671 11:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:14.671 11:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:14.671 11:35:20 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:14.671 11:35:20 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:14.671 11:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:14.671 11:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:14.671 11:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:14.671 11:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:14.671 11:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:15.243 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:36:15.243 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:15.243 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:15.243 11:35:21 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:15.814 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:15.814 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:15.814 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:15.814 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3742142 00:36:15.814 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:15.814 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:15.814 11:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3742142 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3742142 ']' 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.814 11:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:15.814 [2024-12-06 11:35:21.901371] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:36:15.814 [2024-12-06 11:35:21.901431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.074 [2024-12-06 11:35:21.989514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:16.074 [2024-12-06 11:35:22.030537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.074 [2024-12-06 11:35:22.030573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.074 [2024-12-06 11:35:22.030581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.074 [2024-12-06 11:35:22.030588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.074 [2024-12-06 11:35:22.030594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:16.074 [2024-12-06 11:35:22.032208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.074 [2024-12-06 11:35:22.032330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:16.074 [2024-12-06 11:35:22.032487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.074 [2024-12-06 11:35:22.032487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:16.646 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:16.646 INFO: Log level set to 20 00:36:16.646 INFO: Requests: 00:36:16.646 { 00:36:16.646 "jsonrpc": "2.0", 00:36:16.646 "method": "nvmf_set_config", 00:36:16.646 "id": 1, 00:36:16.646 "params": { 00:36:16.646 "admin_cmd_passthru": { 00:36:16.646 "identify_ctrlr": true 00:36:16.646 } 00:36:16.646 } 00:36:16.646 } 00:36:16.646 00:36:16.646 INFO: response: 00:36:16.646 { 00:36:16.646 "jsonrpc": "2.0", 00:36:16.646 "id": 1, 00:36:16.646 "result": true 00:36:16.646 } 00:36:16.646 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.646 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:16.646 INFO: Setting log level to 20 00:36:16.646 INFO: Setting log level to 20 00:36:16.646 INFO: Log level set to 20 00:36:16.646 INFO: Log level set to 20 00:36:16.646 
INFO: Requests: 00:36:16.646 { 00:36:16.646 "jsonrpc": "2.0", 00:36:16.646 "method": "framework_start_init", 00:36:16.646 "id": 1 00:36:16.646 } 00:36:16.646 00:36:16.646 INFO: Requests: 00:36:16.646 { 00:36:16.646 "jsonrpc": "2.0", 00:36:16.646 "method": "framework_start_init", 00:36:16.646 "id": 1 00:36:16.646 } 00:36:16.646 00:36:16.646 [2024-12-06 11:35:22.777873] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:16.646 INFO: response: 00:36:16.646 { 00:36:16.646 "jsonrpc": "2.0", 00:36:16.646 "id": 1, 00:36:16.646 "result": true 00:36:16.646 } 00:36:16.646 00:36:16.646 INFO: response: 00:36:16.646 { 00:36:16.646 "jsonrpc": "2.0", 00:36:16.646 "id": 1, 00:36:16.646 "result": true 00:36:16.646 } 00:36:16.646 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.646 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:16.646 INFO: Setting log level to 40 00:36:16.646 INFO: Setting log level to 40 00:36:16.646 INFO: Setting log level to 40 00:36:16.646 [2024-12-06 11:35:22.791192] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.646 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:16.646 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:16.907 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:16.908 11:35:22 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.908 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.168 Nvme0n1 00:36:17.168 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.168 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:17.168 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.169 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.169 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.169 [2024-12-06 11:35:23.192175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.169 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.169 11:35:23 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.169 [ 00:36:17.169 { 00:36:17.169 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:17.169 "subtype": "Discovery", 00:36:17.169 "listen_addresses": [], 00:36:17.169 "allow_any_host": true, 00:36:17.169 "hosts": [] 00:36:17.169 }, 00:36:17.169 { 00:36:17.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:17.169 "subtype": "NVMe", 00:36:17.169 "listen_addresses": [ 00:36:17.169 { 00:36:17.169 "trtype": "TCP", 00:36:17.169 "adrfam": "IPv4", 00:36:17.169 "traddr": "10.0.0.2", 00:36:17.169 "trsvcid": "4420" 00:36:17.169 } 00:36:17.169 ], 00:36:17.169 "allow_any_host": true, 00:36:17.169 "hosts": [], 00:36:17.169 "serial_number": "SPDK00000000000001", 00:36:17.169 "model_number": "SPDK bdev Controller", 00:36:17.169 "max_namespaces": 1, 00:36:17.169 "min_cntlid": 1, 00:36:17.169 "max_cntlid": 65519, 00:36:17.169 "namespaces": [ 00:36:17.169 { 00:36:17.169 "nsid": 1, 00:36:17.169 "bdev_name": "Nvme0n1", 00:36:17.169 "name": "Nvme0n1", 00:36:17.169 "nguid": "3634473052605494002538450000002D", 00:36:17.169 "uuid": "36344730-5260-5494-0025-38450000002d" 00:36:17.169 } 00:36:17.169 ] 00:36:17.169 } 00:36:17.169 ] 00:36:17.169 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.169 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:17.169 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:17.169 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:17.429 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:36:17.429 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:17.429 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:17.429 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:17.690 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:17.690 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:36:17.690 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:17.690 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.690 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:17.690 11:35:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:17.690 rmmod nvme_tcp 00:36:17.690 rmmod nvme_fabrics 00:36:17.690 rmmod nvme_keyring 00:36:17.690 11:35:23 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3742142 ']' 00:36:17.690 11:35:23 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3742142 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3742142 ']' 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3742142 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3742142 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3742142' 00:36:17.690 killing process with pid 3742142 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3742142 00:36:17.690 11:35:23 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3742142 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:17.951 11:35:24 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:17.951 11:35:24 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.951 11:35:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:17.951 11:35:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.497 11:35:26 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:20.497 00:36:20.497 real 0m14.128s 00:36:20.497 user 0m10.679s 00:36:20.497 sys 0m7.385s 00:36:20.497 11:35:26 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.497 11:35:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.497 ************************************ 00:36:20.497 END TEST nvmf_identify_passthru 00:36:20.497 ************************************ 00:36:20.497 11:35:26 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:20.497 11:35:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:20.497 11:35:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:20.497 11:35:26 -- common/autotest_common.sh@10 -- # set +x 00:36:20.497 ************************************ 00:36:20.497 START TEST nvmf_dif 00:36:20.497 ************************************ 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:20.497 * Looking for test storage... 
00:36:20.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:20.497 11:35:26 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:20.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.497 --rc genhtml_branch_coverage=1 00:36:20.497 --rc genhtml_function_coverage=1 00:36:20.497 --rc genhtml_legend=1 00:36:20.497 --rc geninfo_all_blocks=1 00:36:20.497 --rc geninfo_unexecuted_blocks=1 00:36:20.497 00:36:20.497 ' 00:36:20.497 11:35:26 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:20.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.497 --rc genhtml_branch_coverage=1 00:36:20.497 --rc genhtml_function_coverage=1 00:36:20.497 --rc genhtml_legend=1 00:36:20.498 --rc geninfo_all_blocks=1 00:36:20.498 --rc geninfo_unexecuted_blocks=1 00:36:20.498 00:36:20.498 ' 00:36:20.498 11:35:26 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:36:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.498 --rc genhtml_branch_coverage=1 00:36:20.498 --rc genhtml_function_coverage=1 00:36:20.498 --rc genhtml_legend=1 00:36:20.498 --rc geninfo_all_blocks=1 00:36:20.498 --rc geninfo_unexecuted_blocks=1 00:36:20.498 00:36:20.498 ' 00:36:20.498 11:35:26 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.498 --rc genhtml_branch_coverage=1 00:36:20.498 --rc genhtml_function_coverage=1 00:36:20.498 --rc genhtml_legend=1 00:36:20.498 --rc geninfo_all_blocks=1 00:36:20.498 --rc geninfo_unexecuted_blocks=1 00:36:20.498 00:36:20.498 ' 00:36:20.498 11:35:26 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:20.498 11:35:26 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.498 11:35:26 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:20.498 11:35:26 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.498 11:35:26 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.498 11:35:26 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.498 11:35:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.498 11:35:26 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.498 11:35:26 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.498 11:35:26 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:20.498 11:35:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:20.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:20.498 11:35:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:20.498 11:35:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:20.498 11:35:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:20.498 11:35:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:20.498 11:35:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.498 11:35:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:20.498 11:35:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:20.498 11:35:26 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:20.498 11:35:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:28.777 11:35:33 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:28.777 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:28.777 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:28.777 11:35:33 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:28.777 Found net devices under 0000:31:00.0: cvl_0_0 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:28.777 Found net devices under 0000:31:00.1: cvl_0_1 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:28.777 
11:35:33 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:28.777 11:35:33 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:28.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:28.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:36:28.778 00:36:28.778 --- 10.0.0.2 ping statistics --- 00:36:28.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.778 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:28.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:28.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:36:28.778 00:36:28.778 --- 10.0.0.1 ping statistics --- 00:36:28.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.778 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:28.778 11:35:34 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:31.320 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:31.320 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:36:31.320 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:31.320 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:31.581 11:35:37 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.581 11:35:37 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:31.581 11:35:37 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:31.581 11:35:37 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.581 11:35:37 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:31.581 11:35:37 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:31.841 11:35:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:31.841 11:35:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:31.841 11:35:37 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:31.841 11:35:37 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:31.841 11:35:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.841 11:35:37 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3748795 00:36:31.841 11:35:37 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3748795 00:36:31.841 11:35:37 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3748795 ']' 00:36:31.842 11:35:37 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.842 11:35:37 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:31.842 11:35:37 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:31.842 11:35:37 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:31.842 11:35:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.842 11:35:37 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:31.842 [2024-12-06 11:35:37.838275] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:36:31.842 [2024-12-06 11:35:37.838345] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.842 [2024-12-06 11:35:37.931738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.842 [2024-12-06 11:35:37.971619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.842 [2024-12-06 11:35:37.971654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.842 [2024-12-06 11:35:37.971661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.842 [2024-12-06 11:35:37.971669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.842 [2024-12-06 11:35:37.971674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:31.842 [2024-12-06 11:35:37.972278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:32.783 11:35:38 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.783 11:35:38 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.783 11:35:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:32.783 11:35:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.783 [2024-12-06 11:35:38.658365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.783 11:35:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:32.783 11:35:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.783 ************************************ 00:36:32.783 START TEST fio_dif_1_default 00:36:32.783 ************************************ 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.783 bdev_null0 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:32.783 [2024-12-06 11:35:38.730696] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.783 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default 
-- nvmf/common.sh@560 -- # local subsystem config 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.784 { 00:36:32.784 "params": { 00:36:32.784 "name": "Nvme$subsystem", 00:36:32.784 "trtype": "$TEST_TRANSPORT", 00:36:32.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.784 "adrfam": "ipv4", 00:36:32.784 "trsvcid": "$NVMF_PORT", 00:36:32.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.784 "hdgst": ${hdgst:-false}, 00:36:32.784 "ddgst": ${ddgst:-false} 00:36:32.784 }, 00:36:32.784 "method": "bdev_nvme_attach_controller" 00:36:32.784 } 00:36:32.784 EOF 00:36:32.784 )") 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:32.784 "params": { 00:36:32.784 "name": "Nvme0", 00:36:32.784 "trtype": "tcp", 00:36:32.784 "traddr": "10.0.0.2", 00:36:32.784 "adrfam": "ipv4", 00:36:32.784 "trsvcid": "4420", 00:36:32.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.784 "hdgst": false, 00:36:32.784 "ddgst": false 00:36:32.784 }, 00:36:32.784 "method": "bdev_nvme_attach_controller" 00:36:32.784 }' 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:32.784 11:35:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.044 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:33.044 fio-3.35 
00:36:33.044 Starting 1 thread 00:36:45.275 00:36:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=3749326: Fri Dec 6 11:35:49 2024 00:36:45.275 read: IOPS=96, BW=387KiB/s (397kB/s)(3888KiB/10038msec) 00:36:45.275 slat (nsec): min=5507, max=49160, avg=6488.30, stdev=2156.13 00:36:45.275 clat (usec): min=40754, max=43425, avg=41288.76, stdev=557.35 00:36:45.275 lat (usec): min=40760, max=43463, avg=41295.25, stdev=557.50 00:36:45.275 clat percentiles (usec): 00:36:45.275 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:45.275 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:45.275 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:36:45.275 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:36:45.275 | 99.99th=[43254] 00:36:45.275 bw ( KiB/s): min= 384, max= 416, per=99.92%, avg=387.20, stdev= 9.85, samples=20 00:36:45.275 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:36:45.275 lat (msec) : 50=100.00% 00:36:45.275 cpu : usr=93.33%, sys=6.44%, ctx=15, majf=0, minf=257 00:36:45.275 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.275 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.275 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:45.275 00:36:45.275 Run status group 0 (all jobs): 00:36:45.275 READ: bw=387KiB/s (397kB/s), 387KiB/s-387KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10038-10038msec 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 00:36:45.275 real 0m11.302s 00:36:45.275 user 0m26.302s 00:36:45.275 sys 0m0.975s 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 ************************************ 00:36:45.275 END TEST fio_dif_1_default 00:36:45.275 ************************************ 00:36:45.275 11:35:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:45.275 11:35:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:45.275 11:35:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 ************************************ 00:36:45.275 START TEST fio_dif_1_multi_subsystems 00:36:45.275 ************************************ 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 bdev_null0 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 [2024-12-06 11:35:50.113149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 bdev_null1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 11:35:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.275 { 00:36:45.275 "params": { 00:36:45.275 "name": "Nvme$subsystem", 00:36:45.275 "trtype": "$TEST_TRANSPORT", 00:36:45.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.275 "adrfam": "ipv4", 00:36:45.275 "trsvcid": "$NVMF_PORT", 00:36:45.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.275 "hdgst": ${hdgst:-false}, 00:36:45.275 "ddgst": ${ddgst:-false} 00:36:45.275 }, 00:36:45.275 "method": "bdev_nvme_attach_controller" 00:36:45.275 } 00:36:45.275 EOF 00:36:45.275 )") 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:45.275 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.276 { 00:36:45.276 "params": { 00:36:45.276 "name": "Nvme$subsystem", 00:36:45.276 "trtype": "$TEST_TRANSPORT", 00:36:45.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.276 "adrfam": "ipv4", 00:36:45.276 "trsvcid": "$NVMF_PORT", 00:36:45.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.276 "hdgst": ${hdgst:-false}, 00:36:45.276 "ddgst": ${ddgst:-false} 00:36:45.276 }, 00:36:45.276 "method": "bdev_nvme_attach_controller" 00:36:45.276 } 00:36:45.276 EOF 00:36:45.276 )") 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:45.276 "params": { 00:36:45.276 "name": "Nvme0", 00:36:45.276 "trtype": "tcp", 00:36:45.276 "traddr": "10.0.0.2", 00:36:45.276 "adrfam": "ipv4", 00:36:45.276 "trsvcid": "4420", 00:36:45.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:45.276 "hdgst": false, 00:36:45.276 "ddgst": false 00:36:45.276 }, 00:36:45.276 "method": "bdev_nvme_attach_controller" 00:36:45.276 },{ 00:36:45.276 "params": { 00:36:45.276 "name": "Nvme1", 00:36:45.276 "trtype": "tcp", 00:36:45.276 "traddr": "10.0.0.2", 00:36:45.276 "adrfam": "ipv4", 00:36:45.276 "trsvcid": "4420", 00:36:45.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:45.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:45.276 "hdgst": false, 00:36:45.276 "ddgst": false 00:36:45.276 }, 00:36:45.276 "method": "bdev_nvme_attach_controller" 00:36:45.276 }' 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:45.276 11:35:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:45.276 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:45.276 fio-3.35 00:36:45.276 Starting 2 threads 00:36:57.502 00:36:57.502 filename0: (groupid=0, jobs=1): err= 0: pid=3751584: Fri Dec 6 11:36:01 2024 00:36:57.502 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10027msec) 00:36:57.502 slat (nsec): min=5489, max=36547, avg=6554.75, stdev=1699.52 00:36:57.502 clat (usec): min=40778, max=43006, avg=41072.12, stdev=357.24 00:36:57.502 lat (usec): min=40787, max=43013, avg=41078.68, stdev=357.46 00:36:57.502 clat percentiles (usec): 00:36:57.502 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:57.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:57.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:57.502 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:36:57.502 | 99.99th=[43254] 00:36:57.502 bw ( KiB/s): min= 383, max= 416, per=49.72%, avg=388.75, stdev=11.75, samples=20 00:36:57.502 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:36:57.502 lat (msec) : 50=100.00% 00:36:57.502 cpu : usr=95.36%, sys=4.39%, ctx=14, majf=0, minf=140 00:36:57.502 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.502 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.502 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.502 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:57.502 filename1: (groupid=0, jobs=1): err= 0: pid=3751585: Fri Dec 6 11:36:01 2024 00:36:57.502 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10023msec) 00:36:57.502 slat (nsec): min=5484, max=46011, avg=6688.52, stdev=2160.91 00:36:57.502 clat (usec): min=837, max=43003, avg=40890.68, stdev=3665.52 00:36:57.502 lat (usec): min=845, max=43011, avg=40897.37, stdev=3665.07 00:36:57.502 clat percentiles (usec): 00:36:57.502 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:57.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:57.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:36:57.502 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:36:57.502 | 99.99th=[43254] 00:36:57.502 bw ( KiB/s): min= 384, max= 416, per=49.98%, avg=390.40, stdev=13.13, samples=20 00:36:57.502 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:36:57.502 lat (usec) : 1000=0.82% 00:36:57.502 lat (msec) : 50=99.18% 00:36:57.502 cpu : usr=95.10%, sys=4.66%, ctx=15, majf=0, minf=129 00:36:57.502 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.502 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.502 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:57.502 00:36:57.502 Run status group 0 (all jobs): 00:36:57.502 READ: bw=780KiB/s (799kB/s), 389KiB/s-391KiB/s (399kB/s-400kB/s), io=7824KiB (8012kB), run=10023-10027msec 00:36:57.502 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:57.502 11:36:01 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:57.502 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:57.502 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 00:36:57.503 real 0m11.663s 00:36:57.503 user 0m35.688s 00:36:57.503 sys 0m1.282s 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 ************************************ 00:36:57.503 END TEST fio_dif_1_multi_subsystems 00:36:57.503 ************************************ 00:36:57.503 11:36:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:57.503 11:36:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:57.503 11:36:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 ************************************ 00:36:57.503 START TEST fio_dif_rand_params 00:36:57.503 ************************************ 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 bdev_null0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 11:36:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:57.503 [2024-12-06 11:36:01.871310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:57.503 { 00:36:57.503 "params": { 00:36:57.503 "name": "Nvme$subsystem", 00:36:57.503 "trtype": "$TEST_TRANSPORT", 00:36:57.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:57.503 "adrfam": "ipv4", 00:36:57.503 "trsvcid": "$NVMF_PORT", 00:36:57.503 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:57.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:57.503 "hdgst": ${hdgst:-false}, 00:36:57.503 "ddgst": ${ddgst:-false} 00:36:57.503 }, 00:36:57.503 "method": "bdev_nvme_attach_controller" 00:36:57.503 } 00:36:57.503 EOF 00:36:57.503 )") 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:57.503 11:36:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:57.503 "params": { 00:36:57.503 "name": "Nvme0", 00:36:57.503 "trtype": "tcp", 00:36:57.503 "traddr": "10.0.0.2", 00:36:57.503 "adrfam": "ipv4", 00:36:57.503 "trsvcid": "4420", 00:36:57.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:57.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:57.503 "hdgst": false, 00:36:57.503 "ddgst": false 00:36:57.503 }, 00:36:57.503 "method": "bdev_nvme_attach_controller" 00:36:57.503 }' 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:57.503 11:36:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.503 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:57.503 ... 00:36:57.503 fio-3.35 00:36:57.503 Starting 3 threads 00:37:02.790 00:37:02.790 filename0: (groupid=0, jobs=1): err= 0: pid=3754147: Fri Dec 6 11:36:08 2024 00:37:02.790 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(143MiB/5014msec) 00:37:02.790 slat (nsec): min=5590, max=38641, avg=7737.13, stdev=1909.35 00:37:02.790 clat (usec): min=4304, max=89326, avg=13170.69, stdev=9644.42 00:37:02.790 lat (usec): min=4312, max=89335, avg=13178.43, stdev=9644.47 00:37:02.790 clat percentiles (usec): 00:37:02.790 | 1.00th=[ 5080], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 8717], 00:37:02.790 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11207], 60.00th=[11994], 00:37:02.790 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14746], 95.00th=[46924], 00:37:02.790 | 99.00th=[50594], 99.50th=[51643], 99.90th=[89654], 99.95th=[89654], 00:37:02.790 | 99.99th=[89654] 00:37:02.790 bw ( KiB/s): min=16896, max=37120, per=34.62%, avg=29132.80, stdev=5262.79, samples=10 00:37:02.790 iops : min= 132, max= 290, avg=227.60, stdev=41.12, samples=10 00:37:02.790 lat (msec) : 10=33.57%, 20=60.56%, 50=4.47%, 100=1.40% 00:37:02.790 cpu : usr=94.41%, sys=5.33%, ctx=9, majf=0, minf=84 00:37:02.790 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.790 issued rwts: total=1141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.790 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:02.790 filename0: (groupid=0, jobs=1): err= 0: pid=3754148: Fri Dec 6 11:36:08 2024 00:37:02.790 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(134MiB/5025msec) 00:37:02.790 slat (usec): min=5, max=218, avg= 8.16, stdev= 6.85 00:37:02.790 clat 
(usec): min=4978, max=91800, avg=14102.59, stdev=11351.66 00:37:02.790 lat (usec): min=4987, max=91806, avg=14110.75, stdev=11351.33 00:37:02.790 clat percentiles (usec): 00:37:02.790 | 1.00th=[ 5473], 5.00th=[ 7111], 10.00th=[ 8160], 20.00th=[ 9241], 00:37:02.790 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:37:02.790 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15270], 95.00th=[49546], 00:37:02.790 | 99.00th=[54264], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:37:02.790 | 99.99th=[91751] 00:37:02.790 bw ( KiB/s): min=16384, max=32512, per=32.40%, avg=27264.00, stdev=5448.32, samples=10 00:37:02.790 iops : min= 128, max= 254, avg=213.00, stdev=42.56, samples=10 00:37:02.790 lat (msec) : 10=26.59%, 20=66.57%, 50=2.81%, 100=4.03% 00:37:02.790 cpu : usr=94.15%, sys=5.59%, ctx=11, majf=0, minf=172 00:37:02.790 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.790 issued rwts: total=1068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.790 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:02.790 filename0: (groupid=0, jobs=1): err= 0: pid=3754149: Fri Dec 6 11:36:08 2024 00:37:02.790 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(139MiB/5046msec) 00:37:02.790 slat (nsec): min=5582, max=32144, avg=7236.03, stdev=1841.11 00:37:02.790 clat (usec): min=4570, max=90592, avg=13614.51, stdev=9477.92 00:37:02.790 lat (usec): min=4575, max=90601, avg=13621.75, stdev=9478.22 00:37:02.790 clat percentiles (usec): 00:37:02.790 | 1.00th=[ 5473], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 9110], 00:37:02.790 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11863], 60.00th=[12780], 00:37:02.790 | 70.00th=[13698], 80.00th=[14746], 90.00th=[15926], 95.00th=[17433], 00:37:02.790 | 99.00th=[52691], 99.50th=[56361], 99.90th=[89654], 99.95th=[90702], 
00:37:02.790 | 99.99th=[90702] 00:37:02.790 bw ( KiB/s): min=19238, max=40448, per=33.62%, avg=28291.80, stdev=5458.65, samples=10 00:37:02.790 iops : min= 150, max= 316, avg=221.00, stdev=42.70, samples=10 00:37:02.790 lat (msec) : 10=26.81%, 20=68.50%, 50=2.71%, 100=1.99% 00:37:02.790 cpu : usr=93.97%, sys=5.75%, ctx=8, majf=0, minf=104 00:37:02.790 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.790 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.790 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:02.790 00:37:02.790 Run status group 0 (all jobs): 00:37:02.790 READ: bw=82.2MiB/s (86.2MB/s), 26.6MiB/s-28.4MiB/s (27.9MB/s-29.8MB/s), io=415MiB (435MB), run=5014-5046msec 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.790 bdev_null0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.790 
11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.790 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 [2024-12-06 11:36:08.197032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 bdev_null1 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 
11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:37:02.791 bdev_null2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.791 
11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:02.791 { 00:37:02.791 "params": { 00:37:02.791 "name": "Nvme$subsystem", 00:37:02.791 "trtype": "$TEST_TRANSPORT", 00:37:02.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.791 "adrfam": "ipv4", 00:37:02.791 "trsvcid": "$NVMF_PORT", 00:37:02.791 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.791 "hdgst": ${hdgst:-false}, 00:37:02.791 "ddgst": ${ddgst:-false} 00:37:02.791 }, 00:37:02.791 "method": "bdev_nvme_attach_controller" 00:37:02.791 } 00:37:02.791 EOF 00:37:02.791 )") 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:02.791 { 00:37:02.791 "params": { 00:37:02.791 "name": "Nvme$subsystem", 00:37:02.791 "trtype": "$TEST_TRANSPORT", 00:37:02.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.791 "adrfam": "ipv4", 00:37:02.791 "trsvcid": "$NVMF_PORT", 00:37:02.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.791 "hdgst": ${hdgst:-false}, 00:37:02.791 "ddgst": ${ddgst:-false} 00:37:02.791 }, 00:37:02.791 "method": "bdev_nvme_attach_controller" 00:37:02.791 } 00:37:02.791 EOF 00:37:02.791 )") 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:02.791 11:36:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:02.791 { 00:37:02.791 "params": { 00:37:02.791 "name": "Nvme$subsystem", 00:37:02.791 "trtype": "$TEST_TRANSPORT", 00:37:02.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.791 "adrfam": "ipv4", 00:37:02.791 "trsvcid": "$NVMF_PORT", 00:37:02.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.791 "hdgst": ${hdgst:-false}, 00:37:02.791 "ddgst": ${ddgst:-false} 00:37:02.791 }, 00:37:02.791 "method": "bdev_nvme_attach_controller" 00:37:02.791 } 00:37:02.791 EOF 00:37:02.791 )") 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:02.791 11:36:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:02.791 "params": { 00:37:02.791 "name": "Nvme0", 00:37:02.791 "trtype": "tcp", 00:37:02.791 "traddr": "10.0.0.2", 00:37:02.791 "adrfam": "ipv4", 00:37:02.791 "trsvcid": "4420", 00:37:02.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.791 "hdgst": false, 00:37:02.791 "ddgst": false 00:37:02.791 }, 00:37:02.791 "method": "bdev_nvme_attach_controller" 00:37:02.791 },{ 00:37:02.791 "params": { 00:37:02.791 "name": "Nvme1", 00:37:02.791 "trtype": "tcp", 00:37:02.791 "traddr": "10.0.0.2", 00:37:02.791 "adrfam": "ipv4", 00:37:02.791 "trsvcid": "4420", 00:37:02.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:02.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:02.791 "hdgst": false, 00:37:02.791 "ddgst": false 00:37:02.791 }, 00:37:02.791 "method": "bdev_nvme_attach_controller" 00:37:02.791 },{ 00:37:02.792 "params": { 00:37:02.792 "name": "Nvme2", 00:37:02.792 "trtype": "tcp", 00:37:02.792 "traddr": "10.0.0.2", 00:37:02.792 "adrfam": "ipv4", 00:37:02.792 "trsvcid": "4420", 00:37:02.792 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:02.792 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:02.792 "hdgst": false, 00:37:02.792 "ddgst": false 00:37:02.792 }, 00:37:02.792 "method": "bdev_nvme_attach_controller" 00:37:02.792 }' 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.792 11:36:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:02.792 11:36:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.792 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:02.792 ... 00:37:02.792 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:02.792 ... 00:37:02.792 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:02.792 ... 
00:37:02.792 fio-3.35 00:37:02.792 Starting 24 threads 00:37:15.028 00:37:15.028 filename0: (groupid=0, jobs=1): err= 0: pid=3755774: Fri Dec 6 11:36:19 2024 00:37:15.028 read: IOPS=504, BW=2018KiB/s (2066kB/s)(19.8MiB/10024msec) 00:37:15.028 slat (nsec): min=5676, max=99024, avg=9232.20, stdev=7292.88 00:37:15.028 clat (usec): min=10076, max=35298, avg=31640.98, stdev=3871.84 00:37:15.028 lat (usec): min=10094, max=35304, avg=31650.21, stdev=3871.15 00:37:15.028 clat percentiles (usec): 00:37:15.028 | 1.00th=[18220], 5.00th=[21365], 10.00th=[24249], 20.00th=[32375], 00:37:15.028 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.028 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:37:15.028 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:37:15.028 | 99.99th=[35390] 00:37:15.028 bw ( KiB/s): min= 1916, max= 2176, per=4.30%, avg=2015.10, stdev=116.01, samples=20 00:37:15.028 iops : min= 479, max= 544, avg=503.70, stdev=28.94, samples=20 00:37:15.028 lat (msec) : 20=2.49%, 50=97.51% 00:37:15.028 cpu : usr=98.69%, sys=0.83%, ctx=92, majf=0, minf=90 00:37:15.028 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.028 filename0: (groupid=0, jobs=1): err= 0: pid=3755776: Fri Dec 6 11:36:19 2024 00:37:15.028 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.1MiB/10019msec) 00:37:15.028 slat (nsec): min=5691, max=87170, avg=16221.91, stdev=12227.96 00:37:15.028 clat (usec): min=16951, max=46531, avg=32714.76, stdev=1855.67 00:37:15.028 lat (usec): min=16960, max=46539, avg=32730.98, stdev=1855.06 00:37:15.028 clat percentiles (usec): 00:37:15.028 | 
1.00th=[21103], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:37:15.028 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.028 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.028 | 99.00th=[34341], 99.50th=[34866], 99.90th=[46400], 99.95th=[46400], 00:37:15.028 | 99.99th=[46400] 00:37:15.028 bw ( KiB/s): min= 1896, max= 2048, per=4.15%, avg=1945.42, stdev=54.05, samples=19 00:37:15.028 iops : min= 474, max= 512, avg=486.32, stdev=13.44, samples=19 00:37:15.028 lat (msec) : 20=0.57%, 50=99.43% 00:37:15.028 cpu : usr=98.80%, sys=0.81%, ctx=119, majf=0, minf=75 00:37:15.028 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 issued rwts: total=4877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.028 filename0: (groupid=0, jobs=1): err= 0: pid=3755777: Fri Dec 6 11:36:19 2024 00:37:15.028 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10009msec) 00:37:15.028 slat (usec): min=4, max=103, avg=25.92, stdev=17.85 00:37:15.028 clat (usec): min=17199, max=61434, avg=32806.79, stdev=2176.50 00:37:15.028 lat (usec): min=17205, max=61458, avg=32832.71, stdev=2176.19 00:37:15.028 clat percentiles (usec): 00:37:15.028 | 1.00th=[26084], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.028 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.028 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.028 | 99.00th=[35390], 99.50th=[40633], 99.90th=[61604], 99.95th=[61604], 00:37:15.028 | 99.99th=[61604] 00:37:15.028 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1932.95, stdev=82.26, samples=19 00:37:15.028 iops : min= 416, max= 512, avg=483.16, stdev=20.45, samples=19 00:37:15.028 
lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:37:15.028 cpu : usr=99.02%, sys=0.67%, ctx=16, majf=0, minf=86 00:37:15.028 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.028 filename0: (groupid=0, jobs=1): err= 0: pid=3755778: Fri Dec 6 11:36:19 2024 00:37:15.028 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10014msec) 00:37:15.028 slat (usec): min=5, max=103, avg=15.49, stdev=14.13 00:37:15.028 clat (usec): min=17856, max=46357, avg=32817.66, stdev=1810.82 00:37:15.028 lat (usec): min=17862, max=46370, avg=32833.15, stdev=1810.36 00:37:15.028 clat percentiles (usec): 00:37:15.028 | 1.00th=[20841], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.028 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:15.028 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:37:15.028 | 99.00th=[35390], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:37:15.028 | 99.99th=[46400] 00:37:15.028 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1940.37, stdev=62.22, samples=19 00:37:15.028 iops : min= 448, max= 512, avg=485.05, stdev=15.65, samples=19 00:37:15.028 lat (msec) : 20=0.29%, 50=99.71% 00:37:15.028 cpu : usr=99.10%, sys=0.60%, ctx=16, majf=0, minf=75 00:37:15.028 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.028 
filename0: (groupid=0, jobs=1): err= 0: pid=3755780: Fri Dec 6 11:36:19 2024 00:37:15.028 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10010msec) 00:37:15.028 slat (usec): min=5, max=112, avg=25.85, stdev=18.46 00:37:15.028 clat (usec): min=4124, max=35318, avg=32186.34, stdev=3917.91 00:37:15.028 lat (usec): min=4143, max=35325, avg=32212.19, stdev=3918.43 00:37:15.028 clat percentiles (usec): 00:37:15.028 | 1.00th=[ 5538], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.028 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.028 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.028 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:37:15.028 | 99.99th=[35390] 00:37:15.028 bw ( KiB/s): min= 1916, max= 2560, per=4.21%, avg=1973.68, stdev=149.82, samples=19 00:37:15.028 iops : min= 479, max= 640, avg=493.42, stdev=37.45, samples=19 00:37:15.028 lat (msec) : 10=1.29%, 20=1.29%, 50=97.41% 00:37:15.028 cpu : usr=98.69%, sys=1.01%, ctx=14, majf=0, minf=72 00:37:15.028 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.028 filename0: (groupid=0, jobs=1): err= 0: pid=3755781: Fri Dec 6 11:36:19 2024 00:37:15.028 read: IOPS=487, BW=1952KiB/s (1998kB/s)(19.1MiB/10010msec) 00:37:15.028 slat (nsec): min=5132, max=97623, avg=25101.26, stdev=17131.73 00:37:15.028 clat (usec): min=10480, max=62098, avg=32569.12, stdev=3472.32 00:37:15.028 lat (usec): min=10486, max=62113, avg=32594.22, stdev=3472.71 00:37:15.028 clat percentiles (usec): 00:37:15.028 | 1.00th=[21103], 5.00th=[26870], 10.00th=[31851], 20.00th=[32375], 00:37:15.028 | 
30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:37:15.028 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:37:15.028 | 99.00th=[45351], 99.50th=[49021], 99.90th=[62129], 99.95th=[62129], 00:37:15.028 | 99.99th=[62129] 00:37:15.028 bw ( KiB/s): min= 1660, max= 2096, per=4.14%, avg=1941.16, stdev=88.53, samples=19 00:37:15.028 iops : min= 415, max= 524, avg=485.21, stdev=22.04, samples=19 00:37:15.028 lat (msec) : 20=0.45%, 50=99.06%, 100=0.49% 00:37:15.028 cpu : usr=98.98%, sys=0.72%, ctx=26, majf=0, minf=73 00:37:15.028 IO depths : 1=3.5%, 2=8.8%, 4=21.6%, 8=56.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:37:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 issued rwts: total=4884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.028 filename0: (groupid=0, jobs=1): err= 0: pid=3755782: Fri Dec 6 11:36:19 2024 00:37:15.028 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10009msec) 00:37:15.028 slat (usec): min=5, max=109, avg=18.93, stdev=16.84 00:37:15.028 clat (usec): min=16724, max=76619, avg=33197.57, stdev=5127.27 00:37:15.028 lat (usec): min=16730, max=76635, avg=33216.50, stdev=5126.35 00:37:15.028 clat percentiles (usec): 00:37:15.028 | 1.00th=[21627], 5.00th=[25822], 10.00th=[26870], 20.00th=[30278], 00:37:15.028 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:37:15.028 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39060], 95.00th=[41681], 00:37:15.028 | 99.00th=[49546], 99.50th=[51643], 99.90th=[61604], 99.95th=[61604], 00:37:15.028 | 99.99th=[77071] 00:37:15.028 bw ( KiB/s): min= 1728, max= 2016, per=4.09%, avg=1916.21, stdev=76.37, samples=19 00:37:15.028 iops : min= 432, max= 504, avg=479.05, stdev=19.09, samples=19 00:37:15.028 lat (msec) : 20=0.73%, 50=98.28%, 100=1.00% 00:37:15.028 cpu : usr=98.68%, 
sys=0.86%, ctx=54, majf=0, minf=121 00:37:15.028 IO depths : 1=0.1%, 2=0.3%, 4=3.6%, 8=80.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 complete : 0=0.0%, 4=89.2%, 8=8.5%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.028 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.028 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename0: (groupid=0, jobs=1): err= 0: pid=3755783: Fri Dec 6 11:36:19 2024 00:37:15.029 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10024msec) 00:37:15.029 slat (usec): min=5, max=103, avg=28.64, stdev=16.97 00:37:15.029 clat (usec): min=11708, max=35319, avg=32499.90, stdev=2284.84 00:37:15.029 lat (usec): min=11724, max=35371, avg=32528.54, stdev=2285.04 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[17957], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:15.029 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:37:15.029 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.029 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:37:15.029 | 99.99th=[35390] 00:37:15.029 bw ( KiB/s): min= 1912, max= 2176, per=4.17%, avg=1951.15, stdev=70.38, samples=20 00:37:15.029 iops : min= 478, max= 544, avg=487.75, stdev=17.54, samples=20 00:37:15.029 lat (msec) : 20=1.31%, 50=98.69% 00:37:15.029 cpu : usr=98.92%, sys=0.70%, ctx=71, majf=0, minf=61 00:37:15.029 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:15.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename1: (groupid=0, jobs=1): err= 0: pid=3755784: Fri Dec 6 11:36:19 2024 
00:37:15.029 read: IOPS=484, BW=1939KiB/s (1985kB/s)(19.0MiB/10026msec) 00:37:15.029 slat (nsec): min=5675, max=72094, avg=11829.17, stdev=7209.85 00:37:15.029 clat (usec): min=17915, max=54000, avg=32905.09, stdev=3552.68 00:37:15.029 lat (usec): min=17922, max=54007, avg=32916.92, stdev=3552.85 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[19792], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:37:15.029 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:15.029 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:37:15.029 | 99.00th=[45876], 99.50th=[46924], 99.90th=[53740], 99.95th=[53740], 00:37:15.029 | 99.99th=[53740] 00:37:15.029 bw ( KiB/s): min= 1888, max= 2048, per=4.14%, avg=1939.95, stdev=44.75, samples=19 00:37:15.029 iops : min= 472, max= 512, avg=484.95, stdev=11.12, samples=19 00:37:15.029 lat (msec) : 20=1.23%, 50=98.56%, 100=0.21% 00:37:15.029 cpu : usr=98.44%, sys=1.08%, ctx=98, majf=0, minf=58 00:37:15.029 IO depths : 1=3.1%, 2=7.2%, 4=21.6%, 8=58.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:37:15.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 issued rwts: total=4860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename1: (groupid=0, jobs=1): err= 0: pid=3755787: Fri Dec 6 11:36:19 2024 00:37:15.029 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10009msec) 00:37:15.029 slat (nsec): min=4024, max=91462, avg=21244.17, stdev=14269.01 00:37:15.029 clat (usec): min=16504, max=62672, avg=32862.15, stdev=2529.95 00:37:15.029 lat (usec): min=16535, max=62684, avg=32883.39, stdev=2529.20 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[24249], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:15.029 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.029 
| 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:37:15.029 | 99.00th=[41157], 99.50th=[42730], 99.90th=[62653], 99.95th=[62653], 00:37:15.029 | 99.99th=[62653] 00:37:15.029 bw ( KiB/s): min= 1667, max= 2048, per=4.13%, avg=1933.16, stdev=82.14, samples=19 00:37:15.029 iops : min= 416, max= 512, avg=483.21, stdev=20.61, samples=19 00:37:15.029 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:37:15.029 cpu : usr=98.62%, sys=0.96%, ctx=124, majf=0, minf=55 00:37:15.029 IO depths : 1=5.4%, 2=11.5%, 4=24.5%, 8=51.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:15.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename1: (groupid=0, jobs=1): err= 0: pid=3755788: Fri Dec 6 11:36:19 2024 00:37:15.029 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10024msec) 00:37:15.029 slat (usec): min=5, max=110, avg=22.87, stdev=19.44 00:37:15.029 clat (usec): min=11580, max=35281, avg=32569.56, stdev=2294.82 00:37:15.029 lat (usec): min=11597, max=35298, avg=32592.42, stdev=2293.22 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[18482], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.029 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.029 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:37:15.029 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:37:15.029 | 99.99th=[35390] 00:37:15.029 bw ( KiB/s): min= 1912, max= 2176, per=4.17%, avg=1951.15, stdev=70.38, samples=20 00:37:15.029 iops : min= 478, max= 544, avg=487.75, stdev=17.54, samples=20 00:37:15.029 lat (msec) : 20=1.31%, 50=98.69% 00:37:15.029 cpu : usr=98.61%, sys=0.88%, ctx=88, majf=0, minf=73 00:37:15.029 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:15.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename1: (groupid=0, jobs=1): err= 0: pid=3755789: Fri Dec 6 11:36:19 2024 00:37:15.029 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10016msec) 00:37:15.029 slat (nsec): min=5679, max=96832, avg=10778.59, stdev=9040.00 00:37:15.029 clat (usec): min=17541, max=47835, avg=32860.99, stdev=2093.29 00:37:15.029 lat (usec): min=17548, max=47841, avg=32871.77, stdev=2093.05 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[21103], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:37:15.029 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:15.029 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:37:15.029 | 99.00th=[40109], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:37:15.029 | 99.99th=[47973] 00:37:15.029 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1940.21, stdev=62.62, samples=19 00:37:15.029 iops : min= 448, max= 512, avg=485.05, stdev=15.65, samples=19 00:37:15.029 lat (msec) : 20=0.64%, 50=99.36% 00:37:15.029 cpu : usr=98.91%, sys=0.80%, ctx=16, majf=0, minf=62 00:37:15.029 IO depths : 1=5.6%, 2=11.7%, 4=24.8%, 8=51.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:15.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename1: (groupid=0, jobs=1): err= 0: pid=3755791: Fri Dec 6 11:36:19 2024 00:37:15.029 read: IOPS=483, BW=1935KiB/s (1981kB/s)(18.9MiB/10011msec) 
00:37:15.029 slat (nsec): min=5687, max=99599, avg=25676.58, stdev=16919.01 00:37:15.029 clat (usec): min=10383, max=62569, avg=32830.88, stdev=2647.91 00:37:15.029 lat (usec): min=10389, max=62585, avg=32856.55, stdev=2647.22 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[25560], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.029 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:37:15.029 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:37:15.029 | 99.00th=[42206], 99.50th=[47973], 99.90th=[62653], 99.95th=[62653], 00:37:15.029 | 99.99th=[62653] 00:37:15.029 bw ( KiB/s): min= 1667, max= 2048, per=4.12%, avg=1929.00, stdev=79.21, samples=19 00:37:15.029 iops : min= 416, max= 512, avg=482.21, stdev=19.94, samples=19 00:37:15.029 lat (msec) : 20=0.27%, 50=99.24%, 100=0.50% 00:37:15.029 cpu : usr=98.86%, sys=0.77%, ctx=63, majf=0, minf=52 00:37:15.029 IO depths : 1=5.4%, 2=11.1%, 4=22.9%, 8=53.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:37:15.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 issued rwts: total=4842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename1: (groupid=0, jobs=1): err= 0: pid=3755792: Fri Dec 6 11:36:19 2024 00:37:15.029 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10007msec) 00:37:15.029 slat (nsec): min=5693, max=69986, avg=13668.94, stdev=8164.90 00:37:15.029 clat (usec): min=17684, max=63190, avg=32902.30, stdev=1961.46 00:37:15.029 lat (usec): min=17690, max=63206, avg=32915.97, stdev=1961.22 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:37:15.029 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.029 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 
00:37:15.029 | 99.00th=[34866], 99.50th=[45876], 99.90th=[52691], 99.95th=[52691], 00:37:15.029 | 99.99th=[63177] 00:37:15.029 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1933.05, stdev=57.13, samples=19 00:37:15.029 iops : min= 448, max= 512, avg=483.26, stdev=14.28, samples=19 00:37:15.029 lat (msec) : 20=0.21%, 50=99.46%, 100=0.33% 00:37:15.029 cpu : usr=99.09%, sys=0.61%, ctx=17, majf=0, minf=54 00:37:15.029 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:15.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.029 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.029 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.029 filename1: (groupid=0, jobs=1): err= 0: pid=3755793: Fri Dec 6 11:36:19 2024 00:37:15.029 read: IOPS=506, BW=2025KiB/s (2073kB/s)(19.8MiB/10029msec) 00:37:15.029 slat (usec): min=5, max=103, avg=24.03, stdev=17.22 00:37:15.029 clat (usec): min=3454, max=53029, avg=31404.54, stdev=5081.53 00:37:15.029 lat (usec): min=3474, max=53039, avg=31428.57, stdev=5085.08 00:37:15.029 clat percentiles (usec): 00:37:15.029 | 1.00th=[ 9241], 5.00th=[21103], 10.00th=[23200], 20.00th=[32113], 00:37:15.029 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:37:15.029 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:37:15.029 | 99.00th=[44303], 99.50th=[44827], 99.90th=[49021], 99.95th=[51643], 00:37:15.029 | 99.99th=[53216] 00:37:15.029 bw ( KiB/s): min= 1860, max= 3072, per=4.32%, avg=2021.35, stdev=267.83, samples=20 00:37:15.030 iops : min= 465, max= 768, avg=505.30, stdev=66.91, samples=20 00:37:15.030 lat (msec) : 4=0.28%, 10=0.87%, 20=1.14%, 50=97.64%, 100=0.08% 00:37:15.030 cpu : usr=98.81%, sys=0.88%, ctx=15, majf=0, minf=44 00:37:15.030 IO depths : 1=3.3%, 2=8.7%, 4=22.3%, 8=56.5%, 16=9.3%, 32=0.0%, >=64=0.0% 
00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=5076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename1: (groupid=0, jobs=1): err= 0: pid=3755794: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10011msec) 00:37:15.030 slat (usec): min=5, max=106, avg=28.43, stdev=18.04 00:37:15.030 clat (usec): min=10628, max=62380, avg=32750.59, stdev=2157.61 00:37:15.030 lat (usec): min=10634, max=62396, avg=32779.01, stdev=2157.48 00:37:15.030 clat percentiles (usec): 00:37:15.030 | 1.00th=[30278], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.030 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:37:15.030 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.030 | 99.00th=[34866], 99.50th=[35390], 99.90th=[62129], 99.95th=[62129], 00:37:15.030 | 99.99th=[62129] 00:37:15.030 bw ( KiB/s): min= 1667, max= 2048, per=4.13%, avg=1933.21, stdev=83.75, samples=19 00:37:15.030 iops : min= 416, max= 512, avg=483.26, stdev=21.07, samples=19 00:37:15.030 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:37:15.030 cpu : usr=98.73%, sys=0.79%, ctx=74, majf=0, minf=64 00:37:15.030 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename2: (groupid=0, jobs=1): err= 0: pid=3755795: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=483, BW=1933KiB/s (1979kB/s)(19.0MiB/10051msec) 00:37:15.030 slat (usec): min=5, max=108, 
avg=23.96, stdev=18.56 00:37:15.030 clat (usec): min=20889, max=65731, avg=32885.41, stdev=2075.31 00:37:15.030 lat (usec): min=20896, max=65738, avg=32909.37, stdev=2074.69 00:37:15.030 clat percentiles (usec): 00:37:15.030 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:15.030 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.030 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:37:15.030 | 99.00th=[39584], 99.50th=[44303], 99.90th=[65799], 99.95th=[65799], 00:37:15.030 | 99.99th=[65799] 00:37:15.030 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1935.45, stdev=57.62, samples=20 00:37:15.030 iops : min= 448, max= 512, avg=483.85, stdev=14.41, samples=20 00:37:15.030 lat (msec) : 50=99.59%, 100=0.41% 00:37:15.030 cpu : usr=98.93%, sys=0.77%, ctx=21, majf=0, minf=49 00:37:15.030 IO depths : 1=5.7%, 2=11.6%, 4=23.6%, 8=52.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=4856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename2: (groupid=0, jobs=1): err= 0: pid=3755796: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10013msec) 00:37:15.030 slat (usec): min=5, max=112, avg=26.90, stdev=19.90 00:37:15.030 clat (usec): min=10137, max=35357, avg=32491.91, stdev=2388.59 00:37:15.030 lat (usec): min=10149, max=35365, avg=32518.81, stdev=2388.71 00:37:15.030 clat percentiles (usec): 00:37:15.030 | 1.00th=[17171], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.030 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.030 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.030 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 
99.95th=[35390], 00:37:15.030 | 99.99th=[35390] 00:37:15.030 bw ( KiB/s): min= 1912, max= 2180, per=4.17%, avg=1951.35, stdev=71.05, samples=20 00:37:15.030 iops : min= 478, max= 545, avg=487.80, stdev=17.71, samples=20 00:37:15.030 lat (msec) : 20=1.31%, 50=98.69% 00:37:15.030 cpu : usr=98.73%, sys=0.88%, ctx=36, majf=0, minf=74 00:37:15.030 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename2: (groupid=0, jobs=1): err= 0: pid=3755797: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.2MiB/10021msec) 00:37:15.030 slat (nsec): min=5681, max=82015, avg=16689.24, stdev=11883.19 00:37:15.030 clat (usec): min=13231, max=50925, avg=32539.44, stdev=2753.56 00:37:15.030 lat (usec): min=13238, max=50935, avg=32556.13, stdev=2754.39 00:37:15.030 clat percentiles (usec): 00:37:15.030 | 1.00th=[20317], 5.00th=[31327], 10.00th=[32113], 20.00th=[32375], 00:37:15.030 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.030 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.030 | 99.00th=[39060], 99.50th=[45351], 99.90th=[51119], 99.95th=[51119], 00:37:15.030 | 99.99th=[51119] 00:37:15.030 bw ( KiB/s): min= 1792, max= 2096, per=4.18%, avg=1956.84, stdev=78.50, samples=19 00:37:15.030 iops : min= 448, max= 524, avg=489.21, stdev=19.63, samples=19 00:37:15.030 lat (msec) : 20=0.92%, 50=98.76%, 100=0.33% 00:37:15.030 cpu : usr=99.13%, sys=0.56%, ctx=14, majf=0, minf=53 00:37:15.030 IO depths : 1=5.5%, 2=11.4%, 4=23.8%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:15.030 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=4904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename2: (groupid=0, jobs=1): err= 0: pid=3755798: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=484, BW=1940KiB/s (1986kB/s)(19.0MiB/10009msec) 00:37:15.030 slat (nsec): min=4032, max=92661, avg=25085.37, stdev=14880.84 00:37:15.030 clat (usec): min=21300, max=48943, avg=32767.28, stdev=1634.54 00:37:15.030 lat (usec): min=21312, max=48979, avg=32792.36, stdev=1634.84 00:37:15.030 clat percentiles (usec): 00:37:15.030 | 1.00th=[24773], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:15.030 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:37:15.030 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:37:15.030 | 99.00th=[38536], 99.50th=[41157], 99.90th=[48497], 99.95th=[49021], 00:37:15.030 | 99.99th=[49021] 00:37:15.030 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1936.16, stdev=58.75, samples=19 00:37:15.030 iops : min= 448, max= 512, avg=484.00, stdev=14.79, samples=19 00:37:15.030 lat (msec) : 50=100.00% 00:37:15.030 cpu : usr=98.09%, sys=1.21%, ctx=213, majf=0, minf=49 00:37:15.030 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename2: (groupid=0, jobs=1): err= 0: pid=3755799: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10008msec) 00:37:15.030 slat (nsec): min=2892, max=30496, avg=6802.06, stdev=1766.61 00:37:15.030 clat (usec): min=2879, max=35207, avg=30082.33, 
stdev=5720.79 00:37:15.030 lat (usec): min=2886, max=35218, avg=30089.13, stdev=5720.75 00:37:15.030 clat percentiles (usec): 00:37:15.030 | 1.00th=[ 6128], 5.00th=[19792], 10.00th=[20841], 20.00th=[25560], 00:37:15.030 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:37:15.030 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:37:15.030 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:37:15.030 | 99.99th=[35390] 00:37:15.030 bw ( KiB/s): min= 1916, max= 2816, per=4.54%, avg=2128.11, stdev=206.01, samples=19 00:37:15.030 iops : min= 479, max= 704, avg=531.95, stdev=51.54, samples=19 00:37:15.030 lat (msec) : 4=0.30%, 10=1.34%, 20=4.84%, 50=93.52% 00:37:15.030 cpu : usr=99.00%, sys=0.73%, ctx=19, majf=0, minf=87 00:37:15.030 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename2: (groupid=0, jobs=1): err= 0: pid=3755800: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10009msec) 00:37:15.030 slat (usec): min=4, max=104, avg=22.21, stdev=18.74 00:37:15.030 clat (usec): min=13017, max=62230, avg=32836.55, stdev=5815.38 00:37:15.030 lat (usec): min=13035, max=62238, avg=32858.75, stdev=5815.41 00:37:15.030 clat percentiles (usec): 00:37:15.030 | 1.00th=[16909], 5.00th=[22414], 10.00th=[26346], 20.00th=[31589], 00:37:15.030 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:15.030 | 70.00th=[33424], 80.00th=[33817], 90.00th=[39584], 95.00th=[44303], 00:37:15.030 | 99.00th=[49546], 99.50th=[53740], 99.90th=[62129], 99.95th=[62129], 00:37:15.030 | 99.99th=[62129] 00:37:15.030 bw ( KiB/s): 
min= 1712, max= 2064, per=4.13%, avg=1933.00, stdev=84.06, samples=19 00:37:15.030 iops : min= 428, max= 516, avg=483.21, stdev=20.98, samples=19 00:37:15.030 lat (msec) : 20=1.87%, 50=97.24%, 100=0.88% 00:37:15.030 cpu : usr=98.99%, sys=0.66%, ctx=44, majf=0, minf=95 00:37:15.030 IO depths : 1=0.3%, 2=0.7%, 4=5.5%, 8=78.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:37:15.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 complete : 0=0.0%, 4=89.9%, 8=7.7%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.030 issued rwts: total=4860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.030 filename2: (groupid=0, jobs=1): err= 0: pid=3755801: Fri Dec 6 11:36:19 2024 00:37:15.030 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10010msec) 00:37:15.030 slat (usec): min=5, max=111, avg=27.67, stdev=17.46 00:37:15.030 clat (usec): min=9778, max=69305, avg=32762.18, stdev=2651.85 00:37:15.030 lat (usec): min=9785, max=69325, avg=32789.85, stdev=2652.06 00:37:15.031 clat percentiles (usec): 00:37:15.031 | 1.00th=[30016], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:15.031 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:37:15.031 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.031 | 99.00th=[34341], 99.50th=[34866], 99.90th=[69731], 99.95th=[69731], 00:37:15.031 | 99.99th=[69731] 00:37:15.031 bw ( KiB/s): min= 1664, max= 2048, per=4.11%, avg=1926.32, stdev=79.57, samples=19 00:37:15.031 iops : min= 416, max= 512, avg=481.58, stdev=19.89, samples=19 00:37:15.031 lat (msec) : 10=0.21%, 20=0.17%, 50=99.30%, 100=0.33% 00:37:15.031 cpu : usr=98.95%, sys=0.70%, ctx=60, majf=0, minf=65 00:37:15.031 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:15.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.031 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:37:15.031 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.031 filename2: (groupid=0, jobs=1): err= 0: pid=3755802: Fri Dec 6 11:36:19 2024 00:37:15.031 read: IOPS=486, BW=1946KiB/s (1992kB/s)(19.1MiB/10029msec) 00:37:15.031 slat (usec): min=5, max=109, avg=29.72, stdev=18.01 00:37:15.031 clat (usec): min=12050, max=49993, avg=32601.73, stdev=2018.16 00:37:15.031 lat (usec): min=12073, max=50028, avg=32631.45, stdev=2018.44 00:37:15.031 clat percentiles (usec): 00:37:15.031 | 1.00th=[22414], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:37:15.031 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:37:15.031 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:15.031 | 99.00th=[35390], 99.50th=[38011], 99.90th=[42730], 99.95th=[42730], 00:37:15.031 | 99.99th=[50070] 00:37:15.031 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1945.35, stdev=52.03, samples=20 00:37:15.031 iops : min= 480, max= 512, avg=486.30, stdev=12.93, samples=20 00:37:15.031 lat (msec) : 20=0.70%, 50=99.30% 00:37:15.031 cpu : usr=99.12%, sys=0.56%, ctx=14, majf=0, minf=56 00:37:15.031 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:15.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.031 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.031 issued rwts: total=4878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:15.031 00:37:15.031 Run status group 0 (all jobs): 00:37:15.031 READ: bw=45.7MiB/s (48.0MB/s), 1923KiB/s-2123KiB/s (1969kB/s-2174kB/s), io=460MiB (482MB), run=10007-10051msec 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 
00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:15.031 11:36:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 bdev_null0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:15.031 11:36:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 [2024-12-06 11:36:20.089016] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 bdev_null1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 
11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:15.031 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:15.032 
11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:15.032 { 00:37:15.032 "params": { 00:37:15.032 "name": "Nvme$subsystem", 00:37:15.032 "trtype": "$TEST_TRANSPORT", 00:37:15.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.032 "adrfam": "ipv4", 00:37:15.032 "trsvcid": "$NVMF_PORT", 00:37:15.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.032 "hdgst": ${hdgst:-false}, 00:37:15.032 "ddgst": ${ddgst:-false} 00:37:15.032 }, 00:37:15.032 "method": "bdev_nvme_attach_controller" 00:37:15.032 } 00:37:15.032 EOF 00:37:15.032 )") 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:15.032 { 00:37:15.032 "params": { 00:37:15.032 "name": "Nvme$subsystem", 00:37:15.032 "trtype": "$TEST_TRANSPORT", 00:37:15.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.032 "adrfam": "ipv4", 00:37:15.032 "trsvcid": "$NVMF_PORT", 00:37:15.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.032 "hdgst": ${hdgst:-false}, 00:37:15.032 "ddgst": ${ddgst:-false} 00:37:15.032 }, 00:37:15.032 "method": "bdev_nvme_attach_controller" 00:37:15.032 } 00:37:15.032 EOF 00:37:15.032 )") 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:15.032 "params": { 00:37:15.032 "name": "Nvme0", 00:37:15.032 "trtype": "tcp", 00:37:15.032 "traddr": "10.0.0.2", 00:37:15.032 "adrfam": "ipv4", 00:37:15.032 "trsvcid": "4420", 00:37:15.032 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.032 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.032 "hdgst": false, 00:37:15.032 "ddgst": false 00:37:15.032 }, 00:37:15.032 "method": "bdev_nvme_attach_controller" 00:37:15.032 },{ 00:37:15.032 "params": { 00:37:15.032 "name": "Nvme1", 00:37:15.032 "trtype": "tcp", 00:37:15.032 "traddr": "10.0.0.2", 00:37:15.032 "adrfam": "ipv4", 00:37:15.032 "trsvcid": "4420", 00:37:15.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:15.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:15.032 "hdgst": false, 00:37:15.032 "ddgst": false 00:37:15.032 }, 00:37:15.032 "method": "bdev_nvme_attach_controller" 00:37:15.032 }' 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:15.032 11:36:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:15.032 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:15.032 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:15.032 ... 00:37:15.032 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:15.032 ... 00:37:15.032 fio-3.35 00:37:15.032 Starting 4 threads 00:37:20.318 00:37:20.318 filename0: (groupid=0, jobs=1): err= 0: pid=3758309: Fri Dec 6 11:36:26 2024 00:37:20.318 read: IOPS=2184, BW=17.1MiB/s (17.9MB/s)(85.4MiB/5003msec) 00:37:20.318 slat (nsec): min=5513, max=63586, avg=8870.78, stdev=3125.20 00:37:20.318 clat (usec): min=1267, max=5752, avg=3639.95, stdev=478.83 00:37:20.318 lat (usec): min=1293, max=5762, avg=3648.82, stdev=478.60 00:37:20.318 clat percentiles (usec): 00:37:20.318 | 1.00th=[ 2442], 5.00th=[ 2900], 10.00th=[ 2999], 20.00th=[ 3228], 00:37:20.318 | 30.00th=[ 3458], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3851], 00:37:20.318 | 70.00th=[ 3851], 80.00th=[ 3884], 90.00th=[ 4178], 95.00th=[ 4424], 00:37:20.318 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5473], 99.95th=[ 5473], 00:37:20.318 | 99.99th=[ 5735] 00:37:20.318 bw ( KiB/s): min=16560, max=19712, per=26.48%, avg=17481.60, stdev=964.68, samples=10 00:37:20.318 iops : min= 2070, max= 2464, avg=2185.20, stdev=120.58, samples=10 00:37:20.318 lat (msec) : 2=0.55%, 4=87.73%, 10=11.72% 00:37:20.318 cpu : usr=97.58%, sys=2.16%, ctx=6, majf=0, minf=27 00:37:20.318 IO depths : 1=0.1%, 2=1.7%, 4=64.5%, 8=33.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.318 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.318 issued rwts: total=10931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.318 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:20.318 filename0: (groupid=0, jobs=1): err= 0: pid=3758310: Fri Dec 6 11:36:26 2024 00:37:20.318 read: IOPS=2004, BW=15.7MiB/s (16.4MB/s)(78.3MiB/5002msec) 00:37:20.318 slat (nsec): min=5491, max=44719, avg=7926.49, stdev=2611.27 00:37:20.318 clat (usec): min=2257, max=7942, avg=3970.32, stdev=565.31 00:37:20.318 lat (usec): min=2262, max=7967, avg=3978.25, stdev=565.07 00:37:20.318 clat percentiles (usec): 00:37:20.318 | 1.00th=[ 3195], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3621], 00:37:20.318 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851], 00:37:20.318 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4490], 95.00th=[ 5538], 00:37:20.318 | 99.00th=[ 6128], 99.50th=[ 6128], 99.90th=[ 6456], 99.95th=[ 6456], 00:37:20.318 | 99.99th=[ 7898] 00:37:20.318 bw ( KiB/s): min=15328, max=16592, per=24.30%, avg=16040.00, stdev=435.85, samples=10 00:37:20.318 iops : min= 1916, max= 2074, avg=2005.00, stdev=54.48, samples=10 00:37:20.318 lat (msec) : 4=74.31%, 10=25.69% 00:37:20.318 cpu : usr=97.14%, sys=2.58%, ctx=6, majf=0, minf=54 00:37:20.318 IO depths : 1=0.1%, 2=0.1%, 4=70.4%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.318 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.318 issued rwts: total=10026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.318 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:20.318 filename1: (groupid=0, jobs=1): err= 0: pid=3758311: Fri Dec 6 11:36:26 2024 00:37:20.318 read: IOPS=2083, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5002msec) 00:37:20.318 slat (nsec): min=5502, max=47366, avg=8733.00, stdev=3073.67 00:37:20.318 clat (usec): min=1944, max=9369, avg=3815.76, stdev=605.97 00:37:20.318 lat (usec): min=1953, max=9399, 
avg=3824.50, stdev=605.96 00:37:20.318 clat percentiles (usec): 00:37:20.318 | 1.00th=[ 2671], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3458], 00:37:20.318 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3851], 00:37:20.318 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4424], 95.00th=[ 5276], 00:37:20.318 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 9110], 00:37:20.318 | 99.99th=[ 9110] 00:37:20.318 bw ( KiB/s): min=15376, max=17296, per=25.25%, avg=16667.20, stdev=655.24, samples=10 00:37:20.319 iops : min= 1922, max= 2162, avg=2083.40, stdev=81.91, samples=10 00:37:20.319 lat (msec) : 2=0.09%, 4=80.72%, 10=19.19% 00:37:20.319 cpu : usr=97.04%, sys=2.68%, ctx=8, majf=0, minf=40 00:37:20.319 IO depths : 1=0.1%, 2=1.2%, 4=70.6%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.319 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.319 issued rwts: total=10422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.319 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:20.319 filename1: (groupid=0, jobs=1): err= 0: pid=3758312: Fri Dec 6 11:36:26 2024 00:37:20.319 read: IOPS=1979, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5002msec) 00:37:20.319 slat (nsec): min=5498, max=64897, avg=7631.58, stdev=3268.19 00:37:20.319 clat (usec): min=1723, max=6724, avg=4019.90, stdev=620.79 00:37:20.319 lat (usec): min=1728, max=6729, avg=4027.53, stdev=620.41 00:37:20.319 clat percentiles (usec): 00:37:20.319 | 1.00th=[ 3261], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3621], 00:37:20.319 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851], 00:37:20.319 | 70.00th=[ 3949], 80.00th=[ 4178], 90.00th=[ 4686], 95.00th=[ 5669], 00:37:20.319 | 99.00th=[ 6128], 99.50th=[ 6194], 99.90th=[ 6390], 99.95th=[ 6390], 00:37:20.319 | 99.99th=[ 6718] 00:37:20.319 bw ( KiB/s): min=14944, max=16480, per=23.98%, avg=15831.80, stdev=539.01, 
samples=10 00:37:20.319 iops : min= 1868, max= 2060, avg=1978.90, stdev=67.43, samples=10 00:37:20.319 lat (msec) : 2=0.05%, 4=71.72%, 10=28.23% 00:37:20.319 cpu : usr=96.52%, sys=3.20%, ctx=7, majf=0, minf=51 00:37:20.319 IO depths : 1=0.1%, 2=0.1%, 4=73.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.319 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.319 issued rwts: total=9901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.319 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:20.319 00:37:20.319 Run status group 0 (all jobs): 00:37:20.319 READ: bw=64.5MiB/s (67.6MB/s), 15.5MiB/s-17.1MiB/s (16.2MB/s-17.9MB/s), io=323MiB (338MB), run=5002-5003msec 00:37:20.319 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:20.319 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:20.319 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:20.319 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:20.319 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.580 00:37:20.580 real 0m24.699s 00:37:20.580 user 5m13.512s 00:37:20.580 sys 0m4.421s 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 ************************************ 00:37:20.580 END TEST fio_dif_rand_params 00:37:20.580 ************************************ 00:37:20.580 11:36:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:20.580 11:36:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:20.580 11:36:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif -- common/autotest_common.sh@10 
-- # set +x 00:37:20.580 ************************************ 00:37:20.580 START TEST fio_dif_digest 00:37:20.580 ************************************ 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 bdev_null0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:20.580 [2024-12-06 11:36:26.652716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 
-- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:20.580 { 00:37:20.580 "params": { 00:37:20.580 "name": "Nvme$subsystem", 00:37:20.580 "trtype": "$TEST_TRANSPORT", 00:37:20.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:20.580 "adrfam": "ipv4", 00:37:20.580 "trsvcid": "$NVMF_PORT", 00:37:20.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:20.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:20.580 "hdgst": ${hdgst:-false}, 00:37:20.580 "ddgst": ${ddgst:-false} 00:37:20.580 }, 00:37:20.580 "method": "bdev_nvme_attach_controller" 00:37:20.580 } 00:37:20.580 EOF 00:37:20.580 )") 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:20.580 "params": { 00:37:20.580 "name": "Nvme0", 00:37:20.580 "trtype": "tcp", 00:37:20.580 "traddr": "10.0.0.2", 00:37:20.580 "adrfam": "ipv4", 00:37:20.580 "trsvcid": "4420", 00:37:20.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:20.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:20.580 "hdgst": true, 00:37:20.580 "ddgst": true 00:37:20.580 }, 00:37:20.580 "method": "bdev_nvme_attach_controller" 00:37:20.580 }' 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 
00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:20.580 11:36:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:21.176 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:21.176 ... 00:37:21.176 fio-3.35 00:37:21.176 Starting 3 threads 00:37:33.414 00:37:33.414 filename0: (groupid=0, jobs=1): err= 0: pid=3759568: Fri Dec 6 11:36:37 2024 00:37:33.414 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(208MiB/10042msec) 00:37:33.414 slat (nsec): min=5891, max=68668, avg=7699.49, stdev=2447.65 00:37:33.414 clat (usec): min=9118, max=96023, avg=18013.40, stdev=12060.26 00:37:33.414 lat (usec): min=9124, max=96030, avg=18021.10, stdev=12060.24 00:37:33.414 clat percentiles (usec): 00:37:33.414 | 1.00th=[11469], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 00:37:33.414 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14222], 60.00th=[14615], 00:37:33.414 | 70.00th=[14877], 80.00th=[15533], 90.00th=[17171], 95.00th=[54264], 00:37:33.414 | 99.00th=[55837], 99.50th=[56361], 99.90th=[56886], 99.95th=[95945], 00:37:33.414 | 99.99th=[95945] 00:37:33.414 bw ( KiB/s): min=16128, max=25600, per=26.58%, avg=21324.80, stdev=2762.33, samples=20 00:37:33.414 iops : min= 126, max= 200, avg=166.60, stdev=21.58, samples=20 00:37:33.414 lat (msec) : 10=0.18%, 20=90.28%, 50=0.06%, 100=9.48% 00:37:33.414 cpu : usr=94.59%, sys=5.15%, ctx=28, majf=0, minf=94 00:37:33.414 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:37:33.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.414 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:33.414 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:33.414 filename0: (groupid=0, jobs=1): err= 0: pid=3759569: Fri Dec 6 11:36:37 2024 00:37:33.414 read: IOPS=229, BW=28.6MiB/s (30.0MB/s)(288MiB/10048msec) 00:37:33.414 slat (nsec): min=5912, max=70849, avg=7606.51, stdev=2471.28 00:37:33.414 clat (usec): min=6948, max=51424, avg=13061.14, stdev=2240.85 00:37:33.414 lat (usec): min=6955, max=51437, avg=13068.75, stdev=2240.99 00:37:33.414 clat percentiles (usec): 00:37:33.414 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10945], 00:37:33.414 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13566], 60.00th=[13960], 00:37:33.414 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15270], 95.00th=[15664], 00:37:33.414 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17695], 99.95th=[48497], 00:37:33.414 | 99.99th=[51643] 00:37:33.414 bw ( KiB/s): min=27392, max=32512, per=36.71%, avg=29452.80, stdev=1252.69, samples=20 00:37:33.414 iops : min= 214, max= 254, avg=230.10, stdev= 9.79, samples=20 00:37:33.414 lat (msec) : 10=10.64%, 20=89.27%, 50=0.04%, 100=0.04% 00:37:33.414 cpu : usr=93.99%, sys=5.75%, ctx=33, majf=0, minf=229 00:37:33.414 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:33.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.414 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:33.414 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:33.414 filename0: (groupid=0, jobs=1): err= 0: pid=3759570: Fri Dec 6 11:36:37 2024 00:37:33.414 read: IOPS=231, BW=29.0MiB/s 
(30.4MB/s)(291MiB/10045msec) 00:37:33.414 slat (nsec): min=5905, max=31639, avg=7594.46, stdev=1533.85 00:37:33.414 clat (usec): min=7246, max=51260, avg=12917.32, stdev=2771.47 00:37:33.414 lat (usec): min=7252, max=51267, avg=12924.92, stdev=2771.53 00:37:33.414 clat percentiles (usec): 00:37:33.414 | 1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10421], 00:37:33.414 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13304], 60.00th=[13829], 00:37:33.414 | 70.00th=[14222], 80.00th=[14746], 90.00th=[15401], 95.00th=[16057], 00:37:33.414 | 99.00th=[17171], 99.50th=[17695], 99.90th=[50070], 99.95th=[51119], 00:37:33.414 | 99.99th=[51119] 00:37:33.414 bw ( KiB/s): min=25088, max=33536, per=37.11%, avg=29772.80, stdev=2213.29, samples=20 00:37:33.414 iops : min= 196, max= 262, avg=232.60, stdev=17.29, samples=20 00:37:33.414 lat (msec) : 10=15.94%, 20=83.85%, 50=0.09%, 100=0.13% 00:37:33.414 cpu : usr=94.47%, sys=5.27%, ctx=21, majf=0, minf=167 00:37:33.414 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:33.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.414 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:33.414 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:33.414 00:37:33.414 Run status group 0 (all jobs): 00:37:33.414 READ: bw=78.3MiB/s (82.2MB/s), 20.8MiB/s-29.0MiB/s (21.8MB/s-30.4MB/s), io=787MiB (825MB), run=10042-10048msec 00:37:33.414 11:36:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:33.415 
11:36:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.415 00:37:33.415 real 0m11.134s 00:37:33.415 user 0m39.619s 00:37:33.415 sys 0m1.960s 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.415 11:36:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:33.415 ************************************ 00:37:33.415 END TEST fio_dif_digest 00:37:33.415 ************************************ 00:37:33.415 11:36:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:33.415 11:36:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:33.415 rmmod nvme_tcp 00:37:33.415 rmmod nvme_fabrics 00:37:33.415 rmmod nvme_keyring 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:33.415 
11:36:37 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3748795 ']' 00:37:33.415 11:36:37 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3748795 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3748795 ']' 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3748795 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3748795 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3748795' 00:37:33.415 killing process with pid 3748795 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3748795 00:37:33.415 11:36:37 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3748795 00:37:33.415 11:36:38 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:33.415 11:36:38 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:35.959 Waiting for block devices as requested 00:37:35.959 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:35.959 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:35.959 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:35.959 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:35.959 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:36.219 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:36.219 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:36.219 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:36.219 0000:65:00.0 (144d a80a): vfio-pci 
-> nvme 00:37:36.480 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:36.480 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:36.740 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:36.740 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:36.740 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:36.740 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:37.000 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:37.000 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.260 11:36:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.260 11:36:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:37.260 11:36:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.805 11:36:45 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.805 00:37:39.805 real 1m19.181s 00:37:39.805 user 7m57.610s 00:37:39.805 sys 0m22.339s 00:37:39.805 11:36:45 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.805 11:36:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:39.805 ************************************ 00:37:39.805 END TEST nvmf_dif 00:37:39.805 ************************************ 00:37:39.805 11:36:45 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:39.805 11:36:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:39.805 11:36:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.805 11:36:45 -- common/autotest_common.sh@10 -- # set +x 00:37:39.805 ************************************ 00:37:39.805 START TEST nvmf_abort_qd_sizes 00:37:39.805 ************************************ 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:39.805 * Looking for test storage... 00:37:39.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:39.805 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:39.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.806 --rc genhtml_branch_coverage=1 00:37:39.806 --rc genhtml_function_coverage=1 00:37:39.806 --rc 
genhtml_legend=1 00:37:39.806 --rc geninfo_all_blocks=1 00:37:39.806 --rc geninfo_unexecuted_blocks=1 00:37:39.806 00:37:39.806 ' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:39.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.806 --rc genhtml_branch_coverage=1 00:37:39.806 --rc genhtml_function_coverage=1 00:37:39.806 --rc genhtml_legend=1 00:37:39.806 --rc geninfo_all_blocks=1 00:37:39.806 --rc geninfo_unexecuted_blocks=1 00:37:39.806 00:37:39.806 ' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:39.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.806 --rc genhtml_branch_coverage=1 00:37:39.806 --rc genhtml_function_coverage=1 00:37:39.806 --rc genhtml_legend=1 00:37:39.806 --rc geninfo_all_blocks=1 00:37:39.806 --rc geninfo_unexecuted_blocks=1 00:37:39.806 00:37:39.806 ' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:39.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.806 --rc genhtml_branch_coverage=1 00:37:39.806 --rc genhtml_function_coverage=1 00:37:39.806 --rc genhtml_legend=1 00:37:39.806 --rc geninfo_all_blocks=1 00:37:39.806 --rc geninfo_unexecuted_blocks=1 00:37:39.806 00:37:39.806 ' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:39.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:37:39.806 11:36:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:47.956 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:47.957 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:47.957 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:47.957 Found net devices under 0000:31:00.0: cvl_0_0 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:47.957 Found net devices under 0000:31:00.1: cvl_0_1 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:47.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:47.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.724 ms 00:37:47.957 00:37:47.957 --- 10.0.0.2 ping statistics --- 00:37:47.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.957 rtt min/avg/max/mdev = 0.724/0.724/0.724/0.000 ms 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:47.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:47.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:37:47.957 00:37:47.957 --- 10.0.0.1 ping statistics --- 00:37:47.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.957 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:47.957 11:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:52.166 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:37:52.166 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:52.166 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3770003 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3770003 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3770003 ']' 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:52.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.428 11:36:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:52.428 [2024-12-06 11:36:58.579931] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:37:52.428 [2024-12-06 11:36:58.579998] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:52.689 [2024-12-06 11:36:58.676355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:52.689 [2024-12-06 11:36:58.720142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:52.689 [2024-12-06 11:36:58.720181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:52.689 [2024-12-06 11:36:58.720190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:52.689 [2024-12-06 11:36:58.720197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:52.689 [2024-12-06 11:36:58.720203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:52.689 [2024-12-06 11:36:58.721993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:52.689 [2024-12-06 11:36:58.722255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:52.689 [2024-12-06 11:36:58.722527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:52.689 [2024-12-06 11:36:58.722529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.260 11:36:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.260 11:36:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:53.260 11:36:59 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:53.260 11:36:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:53.260 11:36:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.521 11:36:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:53.521 ************************************ 00:37:53.521 START TEST spdk_target_abort 00:37:53.521 ************************************ 00:37:53.521 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:53.522 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:53.522 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:53.522 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.522 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.784 spdk_targetn1 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.784 [2024-12-06 11:36:59.799934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.784 [2024-12-06 11:36:59.856287] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:53.784 11:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:54.045 [2024-12-06 11:37:00.051215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:672 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:54.045 [2024-12-06 11:37:00.051245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0055 p:1 m:0 dnr:0 00:37:54.045 [2024-12-06 11:37:00.076229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1576 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:54.045 [2024-12-06 11:37:00.076247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00c7 p:1 m:0 dnr:0 00:37:54.045 [2024-12-06 11:37:00.097241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2176 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:54.046 [2024-12-06 
11:37:00.097257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:54.046 [2024-12-06 11:37:00.098007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2248 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:54.046 [2024-12-06 11:37:00.098020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:54.046 [2024-12-06 11:37:00.107523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2648 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:54.046 [2024-12-06 11:37:00.107539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:54.046 [2024-12-06 11:37:00.119353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2976 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:54.046 [2024-12-06 11:37:00.119369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:54.046 [2024-12-06 11:37:00.128811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3384 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:54.046 [2024-12-06 11:37:00.128826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a8 p:0 m:0 dnr:0 00:37:54.046 [2024-12-06 11:37:00.129973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3448 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:54.046 [2024-12-06 11:37:00.129986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b0 p:0 m:0 dnr:0 00:37:54.046 [2024-12-06 11:37:00.136267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3616 len:8 
PRP1 0x200004ac8000 PRP2 0x0 00:37:54.046 [2024-12-06 11:37:00.136282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00c5 p:0 m:0 dnr:0 00:37:54.046 [2024-12-06 11:37:00.138171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3744 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:54.046 [2024-12-06 11:37:00.138184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d6 p:0 m:0 dnr:0 00:37:57.556 Initializing NVMe Controllers 00:37:57.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:57.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:57.556 Initialization complete. Launching workers. 00:37:57.556 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12982, failed: 10 00:37:57.556 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3440, failed to submit 9552 00:37:57.556 success 770, unsuccessful 2670, failed 0 00:37:57.556 11:37:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:57.556 11:37:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:57.556 [2024-12-06 11:37:03.410044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:624 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:57.556 [2024-12-06 11:37:03.410077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:37:57.556 [2024-12-06 11:37:03.434037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 
lba:1152 len:8 PRP1 0x200004e58000 PRP2 0x0 00:37:57.556 [2024-12-06 11:37:03.434063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:009a p:1 m:0 dnr:0 00:37:57.556 [2024-12-06 11:37:03.442038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:1456 len:8 PRP1 0x200004e54000 PRP2 0x0 00:37:57.556 [2024-12-06 11:37:03.442060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00b7 p:1 m:0 dnr:0 00:37:57.556 [2024-12-06 11:37:03.529913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:3248 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:57.556 [2024-12-06 11:37:03.529939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:009a p:0 m:0 dnr:0 00:37:58.498 [2024-12-06 11:37:04.660869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:29360 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:58.498 [2024-12-06 11:37:04.660900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:005e p:1 m:0 dnr:0 00:37:59.069 [2024-12-06 11:37:05.102141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:39240 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:37:59.069 [2024-12-06 11:37:05.102170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:00.450 Initializing NVMe Controllers 00:38:00.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:00.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:00.450 Initialization complete. Launching workers. 
00:38:00.450 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8509, failed: 6 00:38:00.450 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7296 00:38:00.450 success 305, unsuccessful 914, failed 0 00:38:00.450 11:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:00.450 11:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:02.360 [2024-12-06 11:37:08.000478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:187 nsid:1 lba:138288 len:8 PRP1 0x200004b08000 PRP2 0x0 00:38:02.360 [2024-12-06 11:37:08.000507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:187 cdw0:0 sqhd:004c p:1 m:0 dnr:0 00:38:02.360 [2024-12-06 11:37:08.200286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:163 nsid:1 lba:160400 len:8 PRP1 0x200004afa000 PRP2 0x0 00:38:02.360 [2024-12-06 11:37:08.200305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:163 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:02.930 [2024-12-06 11:37:08.962354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:184 nsid:1 lba:245200 len:8 PRP1 0x200004ace000 PRP2 0x0 00:38:02.930 [2024-12-06 11:37:08.962380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:184 cdw0:0 sqhd:0084 p:1 m:0 dnr:0 00:38:03.867 Initializing NVMe Controllers 00:38:03.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:03.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:03.867 Initialization complete. 
Launching workers. 00:38:03.867 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41907, failed: 3 00:38:03.867 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2619, failed to submit 39291 00:38:03.867 success 564, unsuccessful 2055, failed 0 00:38:03.867 11:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:03.867 11:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.867 11:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:03.867 11:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.867 11:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:03.867 11:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.867 11:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3770003 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3770003 ']' 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3770003 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3770003 00:38:05.780 11:37:11 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3770003' 00:38:05.780 killing process with pid 3770003 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3770003 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3770003 00:38:05.780 00:38:05.780 real 0m12.360s 00:38:05.780 user 0m50.386s 00:38:05.780 sys 0m1.928s 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.780 ************************************ 00:38:05.780 END TEST spdk_target_abort 00:38:05.780 ************************************ 00:38:05.780 11:37:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:05.780 11:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:05.780 11:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:05.780 11:37:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:05.780 ************************************ 00:38:05.780 START TEST kernel_target_abort 00:38:05.780 ************************************ 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:05.780 11:37:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:38:05.780 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:06.041 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:06.041 11:37:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:10.245 Waiting for block devices as requested 00:38:10.245 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:10.245 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:10.504 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:10.504 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:10.504 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:10.504 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:10.764 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:10.764 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:10.764 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:10.764 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:11.334 11:37:17 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:11.334 No valid GPT data, bailing 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:38:11.334 00:38:11.334 Discovery Log Number of Records 2, Generation counter 2 00:38:11.334 =====Discovery Log Entry 0====== 00:38:11.334 trtype: tcp 00:38:11.334 adrfam: ipv4 00:38:11.334 subtype: current discovery subsystem 00:38:11.334 treq: not specified, sq flow control disable supported 00:38:11.334 portid: 1 00:38:11.334 trsvcid: 4420 00:38:11.334 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:11.334 traddr: 10.0.0.1 00:38:11.334 eflags: none 00:38:11.334 sectype: none 00:38:11.334 =====Discovery Log Entry 1====== 00:38:11.334 trtype: tcp 00:38:11.334 adrfam: ipv4 00:38:11.334 subtype: nvme subsystem 00:38:11.334 treq: not specified, sq flow control disable supported 00:38:11.334 portid: 1 00:38:11.334 trsvcid: 4420 00:38:11.334 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:11.334 traddr: 10.0.0.1 00:38:11.334 eflags: none 00:38:11.334 sectype: none 00:38:11.334 11:37:17 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:11.334 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:11.335 11:37:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:14.633 Initializing NVMe Controllers 00:38:14.633 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:14.633 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:14.633 Initialization complete. Launching workers. 
00:38:14.633 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67007, failed: 0 00:38:14.633 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67007, failed to submit 0 00:38:14.633 success 0, unsuccessful 67007, failed 0 00:38:14.633 11:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:14.633 11:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:17.929 Initializing NVMe Controllers 00:38:17.929 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:17.929 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:17.929 Initialization complete. Launching workers. 00:38:17.929 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107645, failed: 0 00:38:17.929 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27154, failed to submit 80491 00:38:17.929 success 0, unsuccessful 27154, failed 0 00:38:17.929 11:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:17.929 11:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:21.229 Initializing NVMe Controllers 00:38:21.229 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:21.229 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:21.229 Initialization complete. Launching workers. 
00:38:21.229 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102036, failed: 0 00:38:21.229 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25514, failed to submit 76522 00:38:21.229 success 0, unsuccessful 25514, failed 0 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:21.229 11:37:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:24.528 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:24.528 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:24.789 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:24.789 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:24.789 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:24.789 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:24.789 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:24.789 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:24.789 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:26.702 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:26.702 00:38:26.702 real 0m20.930s 00:38:26.702 user 0m10.088s 00:38:26.702 sys 0m6.643s 00:38:26.702 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.702 11:37:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:26.702 ************************************ 00:38:26.702 END TEST kernel_target_abort 00:38:26.702 ************************************ 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:26.963 rmmod nvme_tcp 00:38:26.963 rmmod nvme_fabrics 00:38:26.963 rmmod nvme_keyring 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3770003 ']' 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3770003 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3770003 ']' 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3770003 00:38:26.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3770003) - No such process 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3770003 is not found' 00:38:26.963 Process with pid 3770003 is not found 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:26.963 11:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:31.168 Waiting for block devices as requested 00:38:31.168 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:31.168 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:31.168 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:31.168 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:31.168 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:31.168 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:31.168 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:31.168 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:31.429 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:31.429 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:31.690 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:31.690 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:31.690 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:31.951 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
00:38:31.951 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:31.951 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:31.951 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:32.212 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:32.212 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:32.212 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:32.212 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:32.212 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:32.212 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:32.474 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:32.474 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:32.474 11:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.474 11:37:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:32.474 11:37:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:34.392 11:37:40 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:34.392 00:38:34.392 real 0m54.958s 00:38:34.392 user 1m6.477s 00:38:34.392 sys 0m20.909s 00:38:34.392 11:37:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:34.392 11:37:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:34.392 ************************************ 00:38:34.392 END TEST nvmf_abort_qd_sizes 00:38:34.392 ************************************ 00:38:34.392 11:37:40 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:34.392 11:37:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:34.392 11:37:40 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:38:34.392 11:37:40 -- common/autotest_common.sh@10 -- # set +x 00:38:34.392 ************************************ 00:38:34.392 START TEST keyring_file 00:38:34.392 ************************************ 00:38:34.392 11:37:40 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:34.654 * Looking for test storage... 00:38:34.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:34.654 11:37:40 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.654 --rc genhtml_branch_coverage=1 00:38:34.654 --rc genhtml_function_coverage=1 00:38:34.654 --rc genhtml_legend=1 00:38:34.654 --rc geninfo_all_blocks=1 00:38:34.654 --rc geninfo_unexecuted_blocks=1 00:38:34.654 00:38:34.654 ' 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.654 --rc genhtml_branch_coverage=1 00:38:34.654 --rc genhtml_function_coverage=1 00:38:34.654 --rc genhtml_legend=1 00:38:34.654 --rc geninfo_all_blocks=1 00:38:34.654 --rc 
geninfo_unexecuted_blocks=1 00:38:34.654 00:38:34.654 ' 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.654 --rc genhtml_branch_coverage=1 00:38:34.654 --rc genhtml_function_coverage=1 00:38:34.654 --rc genhtml_legend=1 00:38:34.654 --rc geninfo_all_blocks=1 00:38:34.654 --rc geninfo_unexecuted_blocks=1 00:38:34.654 00:38:34.654 ' 00:38:34.654 11:37:40 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:34.654 --rc genhtml_branch_coverage=1 00:38:34.654 --rc genhtml_function_coverage=1 00:38:34.654 --rc genhtml_legend=1 00:38:34.654 --rc geninfo_all_blocks=1 00:38:34.654 --rc geninfo_unexecuted_blocks=1 00:38:34.654 00:38:34.654 ' 00:38:34.654 11:37:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:34.654 11:37:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:34.654 11:37:40 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:34.654 11:37:40 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:34.654 11:37:40 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:34.654 11:37:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.654 11:37:40 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.654 11:37:40 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.654 11:37:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:34.655 11:37:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:34.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LfVWdUkUUO 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:34.655 11:37:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LfVWdUkUUO 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LfVWdUkUUO 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.LfVWdUkUUO 00:38:34.655 11:37:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:34.655 11:37:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cvBYwiTYEo 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:34.916 11:37:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:34.916 11:37:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:34.916 11:37:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:34.916 11:37:40 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:34.916 11:37:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:34.916 11:37:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cvBYwiTYEo 00:38:34.916 11:37:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cvBYwiTYEo 00:38:34.916 11:37:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.cvBYwiTYEo 
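`prep_key` above pipes each raw hex key through an inline `python -` step (`format_interchange_psk`) and writes the result to a `mktemp` file with mode 0600. A sketch of that key formatting, under the assumption (taken from the NVMe/TCP TLS PSK interchange format) that digest 0 means no hash and the payload is the key bytes followed by their little-endian CRC32, base64-encoded; the real helper lives in SPDK's `nvmf/common.sh`:

```shell
#!/bin/sh
# Hedged sketch of format_interchange_psk: wrap a raw hex PSK as
# "NVMeTLSkey-1:<digest>:base64(key || CRC32(key)):".
# The format is assumed from the NVMe/TCP TLS spec, not copied from SPDK.
hex_key=00112233445566778899aabbccddeeff
psk=$(python3 - "$hex_key" <<'EOF'
import base64, struct, sys, zlib

key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)  # little-endian CRC32
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$psk"
keyfile=$(mktemp)               # analogous to /tmp/tmp.LfVWdUkUUO above
printf '%s\n' "$psk" > "$keyfile"
chmod 0600 "$keyfile"           # same permissions the trace applies
```

The resulting file path, not the key material itself, is what `keyring_file_add_key` registers with the target later in the run.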
00:38:34.916 11:37:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=3780783 00:38:34.916 11:37:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3780783 00:38:34.916 11:37:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:34.916 11:37:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3780783 ']' 00:38:34.916 11:37:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:34.916 11:37:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:34.916 11:37:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:34.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:34.916 11:37:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:34.916 11:37:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:34.916 [2024-12-06 11:37:40.939266] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:38:34.916 [2024-12-06 11:37:40.939329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780783 ] 00:38:34.916 [2024-12-06 11:37:41.018415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.916 [2024-12-06 11:37:41.056170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:35.859 11:37:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:35.859 [2024-12-06 11:37:41.717775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:35.859 null0 00:38:35.859 [2024-12-06 11:37:41.749824] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:35.859 [2024-12-06 11:37:41.750120] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.859 11:37:41 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:35.859 [2024-12-06 11:37:41.781904] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:35.859 request: 00:38:35.859 { 00:38:35.859 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.859 "secure_channel": false, 00:38:35.859 "listen_address": { 00:38:35.859 "trtype": "tcp", 00:38:35.859 "traddr": "127.0.0.1", 00:38:35.859 "trsvcid": "4420" 00:38:35.859 }, 00:38:35.859 "method": "nvmf_subsystem_add_listener", 00:38:35.859 "req_id": 1 00:38:35.859 } 00:38:35.859 Got JSON-RPC error response 00:38:35.859 response: 00:38:35.859 { 00:38:35.859 "code": -32602, 00:38:35.859 "message": "Invalid parameters" 00:38:35.859 } 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:35.859 11:37:41 keyring_file -- keyring/file.sh@47 -- # bperfpid=3780885 00:38:35.859 11:37:41 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3780885 /var/tmp/bperf.sock 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3780885 ']' 00:38:35.859 11:37:41 keyring_file -- keyring/file.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:35.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:35.859 11:37:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:35.859 [2024-12-06 11:37:41.840211] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:38:35.859 [2024-12-06 11:37:41.840263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780885 ] 00:38:35.859 [2024-12-06 11:37:41.935611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.859 [2024-12-06 11:37:41.971774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.802 11:37:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:36.802 11:37:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:36.802 11:37:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:36.802 11:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:36.802 11:37:42 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 
/tmp/tmp.cvBYwiTYEo 00:38:36.802 11:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cvBYwiTYEo 00:38:36.802 11:37:42 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:36.802 11:37:42 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:36.802 11:37:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:36.802 11:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:36.802 11:37:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.063 11:37:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.LfVWdUkUUO == \/\t\m\p\/\t\m\p\.\L\f\V\W\d\U\k\U\U\O ]] 00:38:37.063 11:37:43 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:37.063 11:37:43 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:37.063 11:37:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:37.063 11:37:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.063 11:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.324 11:37:43 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.cvBYwiTYEo == \/\t\m\p\/\t\m\p\.\c\v\B\Y\w\i\T\Y\E\o ]] 00:38:37.324 11:37:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.324 11:37:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:37.324 11:37:43 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.324 11:37:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:37.585 11:37:43 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:37.585 11:37:43 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:37.585 11:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:37.846 [2024-12-06 11:37:43.811304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:37.846 nvme0n1 00:38:37.846 11:37:43 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:37.846 11:37:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:37.846 11:37:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.846 11:37:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.846 11:37:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.846 11:37:43 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.107 11:37:44 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:38.107 11:37:44 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:38.107 11:37:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:38.107 11:37:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.107 11:37:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.107 11:37:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.107 11:37:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.107 11:37:44 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:38.107 11:37:44 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:38.368 Running I/O for 1 seconds... 
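The one-second bdevperf run that follows reports 14873.30 IOPS at 58.10 MiB/s. The two figures are consistent by construction: with the 4 KiB I/O size (`-o 4k`), MiB/s is IOPS × 4096 / 2^20, i.e. IOPS / 256. A quick cross-check using the value from the results JSON:

```shell
#!/bin/sh
# Sanity-check the bdevperf summary: MiB/s = IOPS * 4096 / 1048576.
iops=14873.301871        # "iops" field from the results JSON
awk -v i="$iops" 'BEGIN { printf "%.2f MiB/s\n", i / 256 }'
```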
00:38:39.310 14823.00 IOPS, 57.90 MiB/s 00:38:39.310 Latency(us) 00:38:39.310 [2024-12-06T10:37:45.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.310 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:39.310 nvme0n1 : 1.01 14873.30 58.10 0.00 0.00 8586.94 3467.95 12888.75 00:38:39.310 [2024-12-06T10:37:45.477Z] =================================================================================================================== 00:38:39.310 [2024-12-06T10:37:45.477Z] Total : 14873.30 58.10 0.00 0.00 8586.94 3467.95 12888.75 00:38:39.310 { 00:38:39.310 "results": [ 00:38:39.310 { 00:38:39.310 "job": "nvme0n1", 00:38:39.310 "core_mask": "0x2", 00:38:39.310 "workload": "randrw", 00:38:39.310 "percentage": 50, 00:38:39.310 "status": "finished", 00:38:39.310 "queue_depth": 128, 00:38:39.310 "io_size": 4096, 00:38:39.310 "runtime": 1.005224, 00:38:39.310 "iops": 14873.301871025762, 00:38:39.310 "mibps": 58.09883543369438, 00:38:39.310 "io_failed": 0, 00:38:39.310 "io_timeout": 0, 00:38:39.310 "avg_latency_us": 8586.937118141484, 00:38:39.310 "min_latency_us": 3467.9466666666667, 00:38:39.310 "max_latency_us": 12888.746666666666 00:38:39.310 } 00:38:39.310 ], 00:38:39.310 "core_count": 1 00:38:39.310 } 00:38:39.310 11:37:45 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:39.310 11:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:39.570 11:37:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:39.570 11:37:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:39.570 11:37:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:39.570 11:37:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.570 11:37:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key0")' 00:38:39.570 11:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.570 11:37:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:39.570 11:37:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:39.830 11:37:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:39.830 11:37:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:39.830 11:37:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.830 11:37:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:39.830 11:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.830 11:37:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:39.830 11:37:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:39.830 11:37:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:39.830 11:37:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:39.830 11:37:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:39.830 11:37:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:39.830 11:37:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:39.830 11:37:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:39.830 11:37:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:39.830 11:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.090 [2024-12-06 11:37:46.064735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:40.090 [2024-12-06 11:37:46.065475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e1630 (107): Transport endpoint is not connected 00:38:40.090 [2024-12-06 11:37:46.066470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e1630 (9): Bad file descriptor 00:38:40.090 [2024-12-06 11:37:46.067472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:40.090 [2024-12-06 11:37:46.067480] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:40.090 [2024-12-06 11:37:46.067486] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:40.090 [2024-12-06 11:37:46.067493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:40.090 request: 00:38:40.090 { 00:38:40.090 "name": "nvme0", 00:38:40.090 "trtype": "tcp", 00:38:40.090 "traddr": "127.0.0.1", 00:38:40.090 "adrfam": "ipv4", 00:38:40.090 "trsvcid": "4420", 00:38:40.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.090 "prchk_reftag": false, 00:38:40.090 "prchk_guard": false, 00:38:40.090 "hdgst": false, 00:38:40.090 "ddgst": false, 00:38:40.090 "psk": "key1", 00:38:40.090 "allow_unrecognized_csi": false, 00:38:40.090 "method": "bdev_nvme_attach_controller", 00:38:40.090 "req_id": 1 00:38:40.090 } 00:38:40.090 Got JSON-RPC error response 00:38:40.090 response: 00:38:40.090 { 00:38:40.090 "code": -5, 00:38:40.090 "message": "Input/output error" 00:38:40.090 } 00:38:40.090 11:37:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:40.091 11:37:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:40.091 11:37:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:40.091 11:37:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:40.091 11:37:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:40.091 11:37:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.091 11:37:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.091 11:37:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.091 11:37:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.091 11:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.351 11:37:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:40.351 11:37:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:40.351 11:37:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:40.351 11:37:46 keyring_file -- keyring/common.sh@12 -- # jq -r 
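The trace above wraps `bperf_cmd bdev_nvme_attach_controller ... --psk key1` in the `NOT` helper, so the step passes precisely because the attach fails with an I/O error. A simplified sketch of that inversion pattern (the real helper in `autotest_common.sh` also routes through `valid_exec_arg` and tracks the exit status in `es`; this omits that bookkeeping):

```shell
# Simplified sketch of the NOT expected-failure wrapper: run the wrapped
# command and invert its exit status, so the test step succeeds only when
# the command fails as the test expects.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}

# A command that fails is treated as success by the wrapper.
NOT false && echo "expected failure observed"
```

This is why the JSON-RPC "Input/output error" response above does not abort the run: the harness treats it as the expected outcome of attaching with the wrong PSK.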
.refcnt 00:38:40.351 11:37:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.351 11:37:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:40.351 11:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.351 11:37:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:40.351 11:37:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:40.351 11:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:40.612 11:37:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:40.612 11:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:40.873 11:37:46 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:40.873 11:37:46 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:40.873 11:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.873 11:37:46 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:40.873 11:37:46 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.LfVWdUkUUO 00:38:40.873 11:37:46 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:40.873 11:37:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:40.873 11:37:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:40.873 11:37:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:40.873 11:37:46 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:40.873 11:37:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:40.873 11:37:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:40.873 11:37:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:40.873 11:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:41.133 [2024-12-06 11:37:47.116196] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LfVWdUkUUO': 0100660 00:38:41.133 [2024-12-06 11:37:47.116216] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:41.133 request: 00:38:41.133 { 00:38:41.133 "name": "key0", 00:38:41.133 "path": "/tmp/tmp.LfVWdUkUUO", 00:38:41.133 "method": "keyring_file_add_key", 00:38:41.133 "req_id": 1 00:38:41.133 } 00:38:41.133 Got JSON-RPC error response 00:38:41.133 response: 00:38:41.133 { 00:38:41.133 "code": -1, 00:38:41.133 "message": "Operation not permitted" 00:38:41.133 } 00:38:41.133 11:37:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:41.133 11:37:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.133 11:37:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.133 11:37:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.133 11:37:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.LfVWdUkUUO 00:38:41.133 11:37:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:41.133 11:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LfVWdUkUUO 00:38:41.393 11:37:47 
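The `keyring_file_add_key` rejection above ("Invalid permissions for key file ... 0100660") shows that SPDK's file keyring refuses key files accessible to group or other; the test then re-adds the same file after `chmod 0600` and succeeds. A hedged shell approximation of that `keyring_file_check_path` mode check (the helper name here is hypothetical; the real check lives in SPDK's `keyring.c`):

```shell
# Hypothetical helper mirroring the keyring_file_check_path behavior seen
# in the log: a key file must exist and have mode 0600 (owner-only).
check_key_perms() {
    local path=$1 perm
    [ -f "$path" ] || { echo "missing"; return 1; }
    perm=$(stat -c '%a' "$path")
    if [ "$perm" = "600" ]; then
        echo "ok"
    else
        echo "bad-perms:$perm"
        return 1
    fi
}

key=$(mktemp)
chmod 0660 "$key"
check_key_perms "$key" || true   # prints bad-perms:660
chmod 0600 "$key"
check_key_perms "$key"           # prints ok
rm -f "$key"
```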
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.LfVWdUkUUO 00:38:41.393 11:37:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:41.393 11:37:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:41.393 11:37:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:41.393 11:37:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.393 11:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.393 11:37:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:41.393 11:37:47 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:41.393 11:37:47 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:41.393 11:37:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:41.393 11:37:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:41.393 11:37:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:41.393 11:37:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.393 11:37:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:41.393 11:37:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.393 11:37:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:41.393 11:37:47 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:41.653 [2024-12-06 11:37:47.641528] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.LfVWdUkUUO': No such file or directory 00:38:41.653 [2024-12-06 11:37:47.641541] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:41.653 [2024-12-06 11:37:47.641554] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:41.653 [2024-12-06 11:37:47.641560] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:41.653 [2024-12-06 11:37:47.641569] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:41.653 [2024-12-06 11:37:47.641574] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:41.653 request: 00:38:41.653 { 00:38:41.653 "name": "nvme0", 00:38:41.653 "trtype": "tcp", 00:38:41.653 "traddr": "127.0.0.1", 00:38:41.653 "adrfam": "ipv4", 00:38:41.653 "trsvcid": "4420", 00:38:41.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:41.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:41.653 "prchk_reftag": false, 00:38:41.653 "prchk_guard": false, 00:38:41.653 "hdgst": false, 00:38:41.653 "ddgst": false, 00:38:41.653 "psk": "key0", 00:38:41.653 "allow_unrecognized_csi": false, 00:38:41.653 "method": "bdev_nvme_attach_controller", 00:38:41.653 "req_id": 1 00:38:41.653 } 00:38:41.653 Got JSON-RPC error response 00:38:41.653 response: 00:38:41.653 { 00:38:41.653 "code": -19, 00:38:41.654 "message": "No such device" 00:38:41.654 } 00:38:41.654 11:37:47 keyring_file -- common/autotest_common.sh@655 
-- # es=1 00:38:41.654 11:37:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.654 11:37:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.654 11:37:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.654 11:37:47 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:41.654 11:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:41.914 11:37:47 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WwyeNmBRcn 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:41.914 11:37:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:41.914 11:37:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:41.914 11:37:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:41.914 11:37:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:41.914 11:37:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:41.914 11:37:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WwyeNmBRcn 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WwyeNmBRcn 
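The `prep_key` trace above pipes a hex key and digest through `format_interchange_psk`, which emits an `NVMeTLSkey-1:`-prefixed string via an inline `python -` heredoc. A hedged sketch of what that formatting appears to do, following the NVMe/TCP TP 8006 PSK interchange layout (base64 of the raw key plus a CRC-32 trailer under a versioned prefix); the exact helper is in `nvmf/common.sh` and may differ in detail, so treat this as an approximation:

```shell
# Approximate re-implementation of format_interchange_psk: wrap the raw
# hex key and a little-endian CRC-32 of it in base64, prefixed with
# "NVMeTLSkey-1:<digest>:" and terminated with ":". Assumes python3.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
```

The resulting string is what gets written (mode 0600) to the `mktemp` path and registered as `key0`.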
00:38:41.914 11:37:47 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.WwyeNmBRcn 00:38:41.914 11:37:47 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WwyeNmBRcn 00:38:41.914 11:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WwyeNmBRcn 00:38:41.914 11:37:48 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:41.914 11:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.174 nvme0n1 00:38:42.174 11:37:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:42.174 11:37:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.174 11:37:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.174 11:37:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.174 11:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.174 11:37:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.434 11:37:48 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:42.434 11:37:48 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:42.434 11:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:42.695 11:37:48 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:42.695 11:37:48 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.695 11:37:48 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:42.695 11:37:48 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.695 11:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.955 11:37:49 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:42.955 11:37:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:42.955 11:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:43.217 11:37:49 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:43.217 11:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.217 11:37:49 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:43.217 11:37:49 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:43.217 11:37:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WwyeNmBRcn 00:38:43.217 11:37:49 
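The repeated `get_refcnt`/`get_key` pattern above fetches the full key list over the bperf RPC socket and narrows it with `jq`. The JSON-filtering half of that helper can be sketched in isolation (the RPC fetch itself needs the live `/var/tmp/bperf.sock` from this environment, so this hypothetical function takes the JSON on stdin instead; requires `jq`):

```shell
# Hypothetical stdin-based version of the get_refcnt jq pipeline: select
# the entry whose .name matches and print its .refcnt as a raw value.
refcnt_from_json() {
    local name=$1
    jq -r --arg n "$name" '.[] | select(.name == $n) | .refcnt'
}

# In the real helper the JSON comes from:
#   rpc.py -s /var/tmp/bperf.sock keyring_get_keys
echo '[{"name":"key0","refcnt":2},{"name":"key1","refcnt":1}]' \
    | refcnt_from_json key0    # prints 2
```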
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WwyeNmBRcn 00:38:43.477 11:37:49 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cvBYwiTYEo 00:38:43.477 11:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cvBYwiTYEo 00:38:43.737 11:37:49 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.737 11:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.997 nvme0n1 00:38:43.997 11:37:49 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:43.997 11:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:44.258 11:37:50 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:44.258 "subsystems": [ 00:38:44.258 { 00:38:44.258 "subsystem": "keyring", 00:38:44.258 "config": [ 00:38:44.258 { 00:38:44.258 "method": "keyring_file_add_key", 00:38:44.258 "params": { 00:38:44.258 "name": "key0", 00:38:44.258 "path": "/tmp/tmp.WwyeNmBRcn" 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "keyring_file_add_key", 00:38:44.258 "params": { 00:38:44.258 "name": "key1", 00:38:44.258 "path": "/tmp/tmp.cvBYwiTYEo" 00:38:44.258 } 00:38:44.258 } 00:38:44.258 ] 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "subsystem": "iobuf", 00:38:44.258 "config": [ 00:38:44.258 { 00:38:44.258 "method": "iobuf_set_options", 
00:38:44.258 "params": { 00:38:44.258 "small_pool_count": 8192, 00:38:44.258 "large_pool_count": 1024, 00:38:44.258 "small_bufsize": 8192, 00:38:44.258 "large_bufsize": 135168, 00:38:44.258 "enable_numa": false 00:38:44.258 } 00:38:44.258 } 00:38:44.258 ] 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "subsystem": "sock", 00:38:44.258 "config": [ 00:38:44.258 { 00:38:44.258 "method": "sock_set_default_impl", 00:38:44.258 "params": { 00:38:44.258 "impl_name": "posix" 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "sock_impl_set_options", 00:38:44.258 "params": { 00:38:44.258 "impl_name": "ssl", 00:38:44.258 "recv_buf_size": 4096, 00:38:44.258 "send_buf_size": 4096, 00:38:44.258 "enable_recv_pipe": true, 00:38:44.258 "enable_quickack": false, 00:38:44.258 "enable_placement_id": 0, 00:38:44.258 "enable_zerocopy_send_server": true, 00:38:44.258 "enable_zerocopy_send_client": false, 00:38:44.258 "zerocopy_threshold": 0, 00:38:44.258 "tls_version": 0, 00:38:44.258 "enable_ktls": false 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "sock_impl_set_options", 00:38:44.258 "params": { 00:38:44.258 "impl_name": "posix", 00:38:44.258 "recv_buf_size": 2097152, 00:38:44.258 "send_buf_size": 2097152, 00:38:44.258 "enable_recv_pipe": true, 00:38:44.258 "enable_quickack": false, 00:38:44.258 "enable_placement_id": 0, 00:38:44.258 "enable_zerocopy_send_server": true, 00:38:44.258 "enable_zerocopy_send_client": false, 00:38:44.258 "zerocopy_threshold": 0, 00:38:44.258 "tls_version": 0, 00:38:44.258 "enable_ktls": false 00:38:44.258 } 00:38:44.258 } 00:38:44.258 ] 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "subsystem": "vmd", 00:38:44.258 "config": [] 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "subsystem": "accel", 00:38:44.258 "config": [ 00:38:44.258 { 00:38:44.258 "method": "accel_set_options", 00:38:44.258 "params": { 00:38:44.258 "small_cache_size": 128, 00:38:44.258 "large_cache_size": 16, 00:38:44.258 "task_count": 2048, 00:38:44.258 
"sequence_count": 2048, 00:38:44.258 "buf_count": 2048 00:38:44.258 } 00:38:44.258 } 00:38:44.258 ] 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "subsystem": "bdev", 00:38:44.258 "config": [ 00:38:44.258 { 00:38:44.258 "method": "bdev_set_options", 00:38:44.258 "params": { 00:38:44.258 "bdev_io_pool_size": 65535, 00:38:44.258 "bdev_io_cache_size": 256, 00:38:44.258 "bdev_auto_examine": true, 00:38:44.258 "iobuf_small_cache_size": 128, 00:38:44.258 "iobuf_large_cache_size": 16 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "bdev_raid_set_options", 00:38:44.258 "params": { 00:38:44.258 "process_window_size_kb": 1024, 00:38:44.258 "process_max_bandwidth_mb_sec": 0 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "bdev_iscsi_set_options", 00:38:44.258 "params": { 00:38:44.258 "timeout_sec": 30 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "bdev_nvme_set_options", 00:38:44.258 "params": { 00:38:44.258 "action_on_timeout": "none", 00:38:44.258 "timeout_us": 0, 00:38:44.258 "timeout_admin_us": 0, 00:38:44.258 "keep_alive_timeout_ms": 10000, 00:38:44.258 "arbitration_burst": 0, 00:38:44.258 "low_priority_weight": 0, 00:38:44.258 "medium_priority_weight": 0, 00:38:44.258 "high_priority_weight": 0, 00:38:44.258 "nvme_adminq_poll_period_us": 10000, 00:38:44.258 "nvme_ioq_poll_period_us": 0, 00:38:44.258 "io_queue_requests": 512, 00:38:44.258 "delay_cmd_submit": true, 00:38:44.258 "transport_retry_count": 4, 00:38:44.258 "bdev_retry_count": 3, 00:38:44.258 "transport_ack_timeout": 0, 00:38:44.258 "ctrlr_loss_timeout_sec": 0, 00:38:44.258 "reconnect_delay_sec": 0, 00:38:44.258 "fast_io_fail_timeout_sec": 0, 00:38:44.258 "disable_auto_failback": false, 00:38:44.258 "generate_uuids": false, 00:38:44.258 "transport_tos": 0, 00:38:44.258 "nvme_error_stat": false, 00:38:44.258 "rdma_srq_size": 0, 00:38:44.258 "io_path_stat": false, 00:38:44.258 "allow_accel_sequence": false, 00:38:44.258 "rdma_max_cq_size": 0, 
00:38:44.258 "rdma_cm_event_timeout_ms": 0, 00:38:44.258 "dhchap_digests": [ 00:38:44.258 "sha256", 00:38:44.258 "sha384", 00:38:44.258 "sha512" 00:38:44.258 ], 00:38:44.258 "dhchap_dhgroups": [ 00:38:44.258 "null", 00:38:44.258 "ffdhe2048", 00:38:44.258 "ffdhe3072", 00:38:44.258 "ffdhe4096", 00:38:44.258 "ffdhe6144", 00:38:44.258 "ffdhe8192" 00:38:44.258 ] 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "bdev_nvme_attach_controller", 00:38:44.258 "params": { 00:38:44.258 "name": "nvme0", 00:38:44.258 "trtype": "TCP", 00:38:44.258 "adrfam": "IPv4", 00:38:44.258 "traddr": "127.0.0.1", 00:38:44.258 "trsvcid": "4420", 00:38:44.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:44.258 "prchk_reftag": false, 00:38:44.258 "prchk_guard": false, 00:38:44.258 "ctrlr_loss_timeout_sec": 0, 00:38:44.258 "reconnect_delay_sec": 0, 00:38:44.258 "fast_io_fail_timeout_sec": 0, 00:38:44.258 "psk": "key0", 00:38:44.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:44.258 "hdgst": false, 00:38:44.258 "ddgst": false, 00:38:44.258 "multipath": "multipath" 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "bdev_nvme_set_hotplug", 00:38:44.258 "params": { 00:38:44.258 "period_us": 100000, 00:38:44.258 "enable": false 00:38:44.258 } 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "method": "bdev_wait_for_examine" 00:38:44.258 } 00:38:44.258 ] 00:38:44.258 }, 00:38:44.258 { 00:38:44.258 "subsystem": "nbd", 00:38:44.259 "config": [] 00:38:44.259 } 00:38:44.259 ] 00:38:44.259 }' 00:38:44.259 11:37:50 keyring_file -- keyring/file.sh@115 -- # killprocess 3780885 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3780885 ']' 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3780885 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:44.259 11:37:50 keyring_file -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3780885 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3780885' 00:38:44.259 killing process with pid 3780885 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@973 -- # kill 3780885 00:38:44.259 Received shutdown signal, test time was about 1.000000 seconds 00:38:44.259 00:38:44.259 Latency(us) 00:38:44.259 [2024-12-06T10:37:50.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.259 [2024-12-06T10:37:50.426Z] =================================================================================================================== 00:38:44.259 [2024-12-06T10:37:50.426Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@978 -- # wait 3780885 00:38:44.259 11:37:50 keyring_file -- keyring/file.sh@118 -- # bperfpid=3782614 00:38:44.259 11:37:50 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3782614 /var/tmp/bperf.sock 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3782614 ']' 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:44.259 11:37:50 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:38:44.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:44.259 11:37:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:44.259 11:37:50 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:44.259 "subsystems": [ 00:38:44.259 { 00:38:44.259 "subsystem": "keyring", 00:38:44.259 "config": [ 00:38:44.259 { 00:38:44.259 "method": "keyring_file_add_key", 00:38:44.259 "params": { 00:38:44.259 "name": "key0", 00:38:44.259 "path": "/tmp/tmp.WwyeNmBRcn" 00:38:44.259 } 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "method": "keyring_file_add_key", 00:38:44.259 "params": { 00:38:44.259 "name": "key1", 00:38:44.259 "path": "/tmp/tmp.cvBYwiTYEo" 00:38:44.259 } 00:38:44.259 } 00:38:44.259 ] 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "subsystem": "iobuf", 00:38:44.259 "config": [ 00:38:44.259 { 00:38:44.259 "method": "iobuf_set_options", 00:38:44.259 "params": { 00:38:44.259 "small_pool_count": 8192, 00:38:44.259 "large_pool_count": 1024, 00:38:44.259 "small_bufsize": 8192, 00:38:44.259 "large_bufsize": 135168, 00:38:44.259 "enable_numa": false 00:38:44.259 } 00:38:44.259 } 00:38:44.259 ] 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "subsystem": "sock", 00:38:44.259 "config": [ 00:38:44.259 { 00:38:44.259 "method": "sock_set_default_impl", 00:38:44.259 "params": { 00:38:44.259 "impl_name": "posix" 00:38:44.259 } 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "method": "sock_impl_set_options", 00:38:44.259 "params": { 00:38:44.259 "impl_name": "ssl", 00:38:44.259 "recv_buf_size": 4096, 00:38:44.259 "send_buf_size": 4096, 00:38:44.259 "enable_recv_pipe": true, 00:38:44.259 "enable_quickack": false, 00:38:44.259 "enable_placement_id": 0, 00:38:44.259 "enable_zerocopy_send_server": true, 00:38:44.259 "enable_zerocopy_send_client": false, 00:38:44.259 "zerocopy_threshold": 0, 00:38:44.259 "tls_version": 0, 00:38:44.259 "enable_ktls": 
false 00:38:44.259 } 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "method": "sock_impl_set_options", 00:38:44.259 "params": { 00:38:44.259 "impl_name": "posix", 00:38:44.259 "recv_buf_size": 2097152, 00:38:44.259 "send_buf_size": 2097152, 00:38:44.259 "enable_recv_pipe": true, 00:38:44.259 "enable_quickack": false, 00:38:44.259 "enable_placement_id": 0, 00:38:44.259 "enable_zerocopy_send_server": true, 00:38:44.259 "enable_zerocopy_send_client": false, 00:38:44.259 "zerocopy_threshold": 0, 00:38:44.259 "tls_version": 0, 00:38:44.259 "enable_ktls": false 00:38:44.259 } 00:38:44.259 } 00:38:44.259 ] 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "subsystem": "vmd", 00:38:44.259 "config": [] 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "subsystem": "accel", 00:38:44.259 "config": [ 00:38:44.259 { 00:38:44.259 "method": "accel_set_options", 00:38:44.259 "params": { 00:38:44.259 "small_cache_size": 128, 00:38:44.259 "large_cache_size": 16, 00:38:44.259 "task_count": 2048, 00:38:44.259 "sequence_count": 2048, 00:38:44.259 "buf_count": 2048 00:38:44.259 } 00:38:44.259 } 00:38:44.259 ] 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "subsystem": "bdev", 00:38:44.259 "config": [ 00:38:44.259 { 00:38:44.259 "method": "bdev_set_options", 00:38:44.259 "params": { 00:38:44.259 "bdev_io_pool_size": 65535, 00:38:44.259 "bdev_io_cache_size": 256, 00:38:44.259 "bdev_auto_examine": true, 00:38:44.259 "iobuf_small_cache_size": 128, 00:38:44.259 "iobuf_large_cache_size": 16 00:38:44.259 } 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "method": "bdev_raid_set_options", 00:38:44.259 "params": { 00:38:44.259 "process_window_size_kb": 1024, 00:38:44.259 "process_max_bandwidth_mb_sec": 0 00:38:44.259 } 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "method": "bdev_iscsi_set_options", 00:38:44.259 "params": { 00:38:44.259 "timeout_sec": 30 00:38:44.259 } 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "method": "bdev_nvme_set_options", 00:38:44.259 "params": { 00:38:44.259 "action_on_timeout": "none", 
00:38:44.259 "timeout_us": 0, 00:38:44.259 "timeout_admin_us": 0, 00:38:44.259 "keep_alive_timeout_ms": 10000, 00:38:44.259 "arbitration_burst": 0, 00:38:44.259 "low_priority_weight": 0, 00:38:44.259 "medium_priority_weight": 0, 00:38:44.259 "high_priority_weight": 0, 00:38:44.259 "nvme_adminq_poll_period_us": 10000, 00:38:44.259 "nvme_ioq_poll_period_us": 0, 00:38:44.259 "io_queue_requests": 512, 00:38:44.259 "delay_cmd_submit": true, 00:38:44.259 "transport_retry_count": 4, 00:38:44.259 "bdev_retry_count": 3, 00:38:44.259 "transport_ack_timeout": 0, 00:38:44.259 "ctrlr_loss_timeout_sec": 0, 00:38:44.259 "reconnect_delay_sec": 0, 00:38:44.259 "fast_io_fail_timeout_sec": 0, 00:38:44.259 "disable_auto_failback": false, 00:38:44.259 "generate_uuids": false, 00:38:44.259 "transport_tos": 0, 00:38:44.259 "nvme_error_stat": false, 00:38:44.259 "rdma_srq_size": 0, 00:38:44.259 "io_path_stat": false, 00:38:44.259 "allow_accel_sequence": false, 00:38:44.259 "rdma_max_cq_size": 0, 00:38:44.259 "rdma_cm_event_timeout_ms": 0, 00:38:44.259 "dhchap_digests": [ 00:38:44.259 "sha256", 00:38:44.259 "sha384", 00:38:44.259 "sha512" 00:38:44.259 ], 00:38:44.259 "dhchap_dhgroups": [ 00:38:44.259 "null", 00:38:44.259 "ffdhe2048", 00:38:44.259 "ffdhe3072", 00:38:44.259 "ffdhe4096", 00:38:44.259 "ffdhe6144", 00:38:44.259 "ffdhe8192" 00:38:44.259 ] 00:38:44.259 } 00:38:44.259 }, 00:38:44.259 { 00:38:44.259 "method": "bdev_nvme_attach_controller", 00:38:44.259 "params": { 00:38:44.259 "name": "nvme0", 00:38:44.259 "trtype": "TCP", 00:38:44.259 "adrfam": "IPv4", 00:38:44.259 "traddr": "127.0.0.1", 00:38:44.259 "trsvcid": "4420", 00:38:44.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:44.259 "prchk_reftag": false, 00:38:44.259 "prchk_guard": false, 00:38:44.259 "ctrlr_loss_timeout_sec": 0, 00:38:44.259 "reconnect_delay_sec": 0, 00:38:44.259 "fast_io_fail_timeout_sec": 0, 00:38:44.259 "psk": "key0", 00:38:44.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:44.259 "hdgst": false, 
00:38:44.260 "ddgst": false, 00:38:44.260 "multipath": "multipath" 00:38:44.260 } 00:38:44.260 }, 00:38:44.260 { 00:38:44.260 "method": "bdev_nvme_set_hotplug", 00:38:44.260 "params": { 00:38:44.260 "period_us": 100000, 00:38:44.260 "enable": false 00:38:44.260 } 00:38:44.260 }, 00:38:44.260 { 00:38:44.260 "method": "bdev_wait_for_examine" 00:38:44.260 } 00:38:44.260 ] 00:38:44.260 }, 00:38:44.260 { 00:38:44.260 "subsystem": "nbd", 00:38:44.260 "config": [] 00:38:44.260 } 00:38:44.260 ] 00:38:44.260 }' 00:38:44.260 [2024-12-06 11:37:50.418768] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:38:44.260 [2024-12-06 11:37:50.418829] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782614 ] 00:38:44.519 [2024-12-06 11:37:50.508122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.519 [2024-12-06 11:37:50.536963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:44.519 [2024-12-06 11:37:50.681329] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:45.090 11:37:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:45.090 11:37:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:45.090 11:37:51 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:45.090 11:37:51 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:45.090 11:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.350 11:37:51 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:45.350 11:37:51 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:45.350 11:37:51 keyring_file -- keyring/common.sh@12 -- # 
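The bdevperf relaunch above receives the saved JSON config through `-c /dev/fd/63`, i.e. bash process substitution: the `echo '{ "subsystems": ... }'` output is exposed as a readable file descriptor rather than a temp file. A minimal sketch of that pattern with a hypothetical consumer that just lists the subsystem names (requires bash and `jq`):

```shell
# Hypothetical consumer standing in for "bdevperf -c <file>": read a JSON
# config from a path and print each subsystem name. Fed via process
# substitution, the path it sees is /dev/fd/NN, as in the log above.
show_subsystems() {
    jq -r '.subsystems[].subsystem' "$1"
}

show_subsystems <(echo '{"subsystems":[{"subsystem":"keyring"},{"subsystem":"bdev"}]}')
# prints:
#   keyring
#   bdev
```

This keeps the key paths and config out of the filesystem except for the key files themselves, which the keyring subsystem re-adds on startup.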
get_key key0 00:38:45.350 11:37:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.350 11:37:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.350 11:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.350 11:37:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.610 11:37:51 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:45.610 11:37:51 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:45.610 11:37:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:45.610 11:37:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.610 11:37:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.610 11:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.610 11:37:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:45.610 11:37:51 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:45.610 11:37:51 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:45.610 11:37:51 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:45.610 11:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:45.872 11:37:51 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:45.872 11:37:51 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:45.872 11:37:51 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WwyeNmBRcn /tmp/tmp.cvBYwiTYEo 00:38:45.872 11:37:51 keyring_file -- keyring/file.sh@20 -- # killprocess 3782614 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3782614 ']' 00:38:45.872 
11:37:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3782614 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3782614 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3782614' 00:38:45.872 killing process with pid 3782614 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@973 -- # kill 3782614 00:38:45.872 Received shutdown signal, test time was about 1.000000 seconds 00:38:45.872 00:38:45.872 Latency(us) 00:38:45.872 [2024-12-06T10:37:52.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.872 [2024-12-06T10:37:52.039Z] =================================================================================================================== 00:38:45.872 [2024-12-06T10:37:52.039Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:45.872 11:37:51 keyring_file -- common/autotest_common.sh@978 -- # wait 3782614 00:38:46.134 11:37:52 keyring_file -- keyring/file.sh@21 -- # killprocess 3780783 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3780783 ']' 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3780783 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3780783 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3780783' 00:38:46.134 killing process with pid 3780783 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@973 -- # kill 3780783 00:38:46.134 11:37:52 keyring_file -- common/autotest_common.sh@978 -- # wait 3780783 00:38:46.395 00:38:46.395 real 0m11.800s 00:38:46.395 user 0m28.364s 00:38:46.395 sys 0m2.677s 00:38:46.395 11:37:52 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:46.395 11:37:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:46.395 ************************************ 00:38:46.395 END TEST keyring_file 00:38:46.395 ************************************ 00:38:46.395 11:37:52 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:46.395 11:37:52 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:46.395 11:37:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:46.395 11:37:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.395 11:37:52 -- common/autotest_common.sh@10 -- # set +x 00:38:46.395 ************************************ 00:38:46.395 START TEST keyring_linux 00:38:46.395 ************************************ 00:38:46.395 11:37:52 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:46.395 Joined session keyring: 636996747 00:38:46.395 * Looking for test storage... 
00:38:46.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:46.395 11:37:52 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:46.395 11:37:52 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:38:46.395 11:37:52 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:46.656 11:37:52 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:46.656 11:37:52 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.657 11:37:52 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.657 11:37:52 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.657 11:37:52 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.657 --rc genhtml_branch_coverage=1 00:38:46.657 --rc genhtml_function_coverage=1 00:38:46.657 --rc genhtml_legend=1 00:38:46.657 --rc geninfo_all_blocks=1 00:38:46.657 --rc geninfo_unexecuted_blocks=1 00:38:46.657 00:38:46.657 ' 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.657 --rc genhtml_branch_coverage=1 00:38:46.657 --rc genhtml_function_coverage=1 00:38:46.657 --rc genhtml_legend=1 00:38:46.657 --rc geninfo_all_blocks=1 00:38:46.657 --rc geninfo_unexecuted_blocks=1 00:38:46.657 00:38:46.657 ' 
00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.657 --rc genhtml_branch_coverage=1 00:38:46.657 --rc genhtml_function_coverage=1 00:38:46.657 --rc genhtml_legend=1 00:38:46.657 --rc geninfo_all_blocks=1 00:38:46.657 --rc geninfo_unexecuted_blocks=1 00:38:46.657 00:38:46.657 ' 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:46.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.657 --rc genhtml_branch_coverage=1 00:38:46.657 --rc genhtml_function_coverage=1 00:38:46.657 --rc genhtml_legend=1 00:38:46.657 --rc geninfo_all_blocks=1 00:38:46.657 --rc geninfo_unexecuted_blocks=1 00:38:46.657 00:38:46.657 ' 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.657 11:37:52 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.657 11:37:52 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.657 11:37:52 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.657 11:37:52 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.657 11:37:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.657 11:37:52 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.657 11:37:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.657 11:37:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:46.657 11:37:52 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:46.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:46.657 /tmp/:spdk-test:key0 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:46.657 11:37:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:46.657 11:37:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:46.657 /tmp/:spdk-test:key1 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3783130 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3783130 00:38:46.657 11:37:52 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3783130 ']' 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:46.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:46.657 11:37:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:46.657 [2024-12-06 11:37:52.772367] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:38:46.657 [2024-12-06 11:37:52.772427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783130 ] 00:38:46.945 [2024-12-06 11:37:52.848984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.945 [2024-12-06 11:37:52.885532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:47.615 11:37:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:47.615 [2024-12-06 11:37:53.556647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:47.615 null0 00:38:47.615 [2024-12-06 11:37:53.588699] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:47.615 [2024-12-06 11:37:53.589106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.615 11:37:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:47.615 510116992 00:38:47.615 11:37:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:47.615 209464127 00:38:47.615 11:37:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3783390 00:38:47.615 11:37:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3783390 /var/tmp/bperf.sock 00:38:47.615 11:37:53 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3783390 ']' 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:47.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.615 11:37:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:47.615 [2024-12-06 11:37:53.664954] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:38:47.615 [2024-12-06 11:37:53.665003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783390 ] 00:38:47.615 [2024-12-06 11:37:53.753072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.876 [2024-12-06 11:37:53.783251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.446 11:37:54 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.446 11:37:54 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:48.446 11:37:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:48.446 11:37:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:48.446 11:37:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:48.446 11:37:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:48.706 11:37:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:48.706 11:37:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:48.966 [2024-12-06 11:37:54.980797] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:48.966 nvme0n1 00:38:48.966 11:37:55 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:48.966 11:37:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:48.966 11:37:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:48.966 11:37:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:48.966 11:37:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:48.966 11:37:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.247 11:37:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:49.248 11:37:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:49.248 11:37:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:49.248 11:37:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:49.248 11:37:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:49.248 11:37:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:49.248 11:37:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.508 11:37:55 keyring_linux -- keyring/linux.sh@25 -- # sn=510116992 00:38:49.508 11:37:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:49.508 11:37:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:49.508 11:37:55 keyring_linux -- keyring/linux.sh@26 -- # [[ 510116992 == \5\1\0\1\1\6\9\9\2 ]] 00:38:49.508 11:37:55 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 510116992 00:38:49.508 11:37:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:49.508 11:37:55 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:49.508 Running I/O for 1 seconds... 00:38:50.448 16286.00 IOPS, 63.62 MiB/s 00:38:50.448 Latency(us) 00:38:50.448 [2024-12-06T10:37:56.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.448 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:50.448 nvme0n1 : 1.01 16285.49 63.62 0.00 0.00 7825.64 6144.00 13107.20 00:38:50.448 [2024-12-06T10:37:56.615Z] =================================================================================================================== 00:38:50.448 [2024-12-06T10:37:56.615Z] Total : 16285.49 63.62 0.00 0.00 7825.64 6144.00 13107.20 00:38:50.448 { 00:38:50.448 "results": [ 00:38:50.448 { 00:38:50.448 "job": "nvme0n1", 00:38:50.448 "core_mask": "0x2", 00:38:50.448 "workload": "randread", 00:38:50.448 "status": "finished", 00:38:50.448 "queue_depth": 128, 00:38:50.448 "io_size": 4096, 00:38:50.448 "runtime": 1.007891, 00:38:50.448 "iops": 16285.491189027385, 00:38:50.448 "mibps": 63.61519995713822, 00:38:50.448 "io_failed": 0, 00:38:50.448 "io_timeout": 0, 00:38:50.448 "avg_latency_us": 7825.644982738314, 00:38:50.448 "min_latency_us": 6144.0, 00:38:50.448 "max_latency_us": 13107.2 00:38:50.448 } 00:38:50.448 ], 00:38:50.448 "core_count": 1 00:38:50.448 } 00:38:50.448 11:37:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:50.448 11:37:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:50.706 11:37:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:50.706 11:37:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:50.706 11:37:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:50.706 11:37:56 keyring_linux -- 
keyring/linux.sh@22 -- # jq length 00:38:50.707 11:37:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:50.707 11:37:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.967 11:37:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:50.967 11:37:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:50.967 11:37:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:50.967 11:37:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:50.967 11:37:56 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:50.967 11:37:56 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:50.967 11:37:56 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:50.967 11:37:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:50.967 11:37:56 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:50.967 11:37:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:50.967 11:37:56 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:50.967 11:37:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:50.967 [2024-12-06 11:37:57.060131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:50.967 [2024-12-06 11:37:57.060873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e03e0 (107): Transport endpoint is not connected 00:38:50.967 [2024-12-06 11:37:57.061870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e03e0 (9): Bad file descriptor 00:38:50.967 [2024-12-06 11:37:57.062871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:50.967 [2024-12-06 11:37:57.062880] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:50.967 [2024-12-06 11:37:57.062885] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:50.967 [2024-12-06 11:37:57.062892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:50.967 request: 00:38:50.967 { 00:38:50.967 "name": "nvme0", 00:38:50.967 "trtype": "tcp", 00:38:50.967 "traddr": "127.0.0.1", 00:38:50.967 "adrfam": "ipv4", 00:38:50.967 "trsvcid": "4420", 00:38:50.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:50.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:50.967 "prchk_reftag": false, 00:38:50.967 "prchk_guard": false, 00:38:50.967 "hdgst": false, 00:38:50.967 "ddgst": false, 00:38:50.967 "psk": ":spdk-test:key1", 00:38:50.967 "allow_unrecognized_csi": false, 00:38:50.967 "method": "bdev_nvme_attach_controller", 00:38:50.967 "req_id": 1 00:38:50.967 } 00:38:50.967 Got JSON-RPC error response 00:38:50.967 response: 00:38:50.967 { 00:38:50.967 "code": -5, 00:38:50.967 "message": "Input/output error" 00:38:50.967 } 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@33 -- # sn=510116992 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 510116992 00:38:50.967 1 links removed 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:50.967 
11:37:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@33 -- # sn=209464127 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 209464127 00:38:50.967 1 links removed 00:38:50.967 11:37:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3783390 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3783390 ']' 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3783390 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:50.967 11:37:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3783390 00:38:51.228 11:37:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:51.228 11:37:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:51.228 11:37:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3783390' 00:38:51.228 killing process with pid 3783390 00:38:51.228 11:37:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 3783390 00:38:51.228 Received shutdown signal, test time was about 1.000000 seconds 00:38:51.228 00:38:51.228 Latency(us) 00:38:51.228 [2024-12-06T10:37:57.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.228 [2024-12-06T10:37:57.395Z] =================================================================================================================== 00:38:51.228 [2024-12-06T10:37:57.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 3783390 
00:38:51.229 11:37:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3783130 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3783130 ']' 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3783130 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3783130 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3783130' 00:38:51.229 killing process with pid 3783130 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 3783130 00:38:51.229 11:37:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 3783130 00:38:51.488 00:38:51.488 real 0m5.129s 00:38:51.488 user 0m9.474s 00:38:51.488 sys 0m1.379s 00:38:51.488 11:37:57 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.488 11:37:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:51.488 ************************************ 00:38:51.488 END TEST keyring_linux 00:38:51.488 ************************************ 00:38:51.488 11:37:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:51.488 11:37:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:51.488 11:37:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:51.488 11:37:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:51.488 11:37:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:51.488 11:37:57 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:51.488 11:37:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:51.488 11:37:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:51.488 11:37:57 -- common/autotest_common.sh@10 -- # set +x 00:38:51.488 11:37:57 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:51.488 11:37:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:51.488 11:37:57 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:51.488 11:37:57 -- common/autotest_common.sh@10 -- # set +x 00:38:59.647 INFO: APP EXITING 00:38:59.647 INFO: killing all VMs 00:38:59.647 INFO: killing vhost app 00:38:59.647 INFO: EXIT DONE 00:39:02.945 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:02.945 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:02.945 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:02.945 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:07.152 Cleaning 00:39:07.152 Removing: /var/run/dpdk/spdk0/config 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:07.152 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:07.152 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:07.152 Removing: /var/run/dpdk/spdk1/config 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:07.152 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:07.152 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:07.152 Removing: /var/run/dpdk/spdk2/config 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:07.152 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:07.152 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:07.152 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:07.152 Removing: /var/run/dpdk/spdk3/config 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:07.152 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:07.152 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:07.152 Removing: /var/run/dpdk/spdk4/config 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:07.152 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:07.152 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:39:07.152 Removing: /dev/shm/bdev_svc_trace.1 00:39:07.152 Removing: /dev/shm/nvmf_trace.0 00:39:07.152 Removing: /dev/shm/spdk_tgt_trace.pid3167299 00:39:07.152 Removing: /var/run/dpdk/spdk0 00:39:07.152 Removing: /var/run/dpdk/spdk1 00:39:07.152 Removing: /var/run/dpdk/spdk2 00:39:07.152 Removing: /var/run/dpdk/spdk3 00:39:07.152 Removing: /var/run/dpdk/spdk4 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3165619 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3167299 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3167953 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3169004 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3169332 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3170434 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3170731 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3171074 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3172043 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3172795 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3173191 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3173593 00:39:07.152 Removing: /var/run/dpdk/spdk_pid3174004 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3174403 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3174573 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3174900 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3175287 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3176615 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3180383 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3180755 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3181130 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3181302 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3181841 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3181867 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3182410 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3182563 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3182924 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3183069 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3183298 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3183531 00:39:07.413 Removing: 
/var/run/dpdk/spdk_pid3184082 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3184364 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3184664 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3189732 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3195613 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3208190 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3208876 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3214636 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3215102 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3220908 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3228609 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3232174 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3245729 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3257817 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3259831 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3260856 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3283247 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3289221 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3350436 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3357323 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3365171 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3373450 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3373481 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3374483 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3375536 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3376600 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3377221 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3377379 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3377615 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3377805 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3377808 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3378813 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3379819 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3380829 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3381498 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3381501 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3381839 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3383274 
00:39:07.413 Removing: /var/run/dpdk/spdk_pid3384680 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3395588 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3431799 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3438150 00:39:07.413 Removing: /var/run/dpdk/spdk_pid3440150 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3442165 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3442467 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3442520 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3442711 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3443268 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3445595 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3446633 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3447046 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3449759 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3450461 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3451199 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3456876 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3463983 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3463984 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3463985 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3469326 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3480630 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3486007 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3493923 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3495424 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3497221 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3498795 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3504864 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3510691 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3516311 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3526521 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3526529 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3531934 00:39:07.674 Removing: /var/run/dpdk/spdk_pid3532263 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3532599 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3532944 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3533026 00:39:07.675 Removing: 
/var/run/dpdk/spdk_pid3539178 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3539943 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3546248 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3549591 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3556670 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3563705 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3574393 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3583635 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3583637 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3609256 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3610126 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3610926 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3611607 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3612670 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3613361 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3614053 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3614868 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3620488 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3620810 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3628515 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3628892 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3635718 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3641411 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3654008 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3654684 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3660331 00:39:07.675 Removing: /var/run/dpdk/spdk_pid3660757 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3666160 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3673571 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3676539 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3689799 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3701491 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3703851 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3704902 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3726127 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3731205 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3734403 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3742496 
00:39:07.936 Removing: /var/run/dpdk/spdk_pid3742501 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3748885 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3751373 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3753684 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3755182 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3757898 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3759360 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3770245 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3770912 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3771579 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3774648 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3775237 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3775683 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3780783 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3780885 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3782614 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3783130 00:39:07.936 Removing: /var/run/dpdk/spdk_pid3783390 00:39:07.936 Clean 00:39:07.936 11:38:14 -- common/autotest_common.sh@1453 -- # return 0 00:39:07.936 11:38:14 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:07.936 11:38:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:07.936 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:39:08.196 11:38:14 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:39:08.196 11:38:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.196 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:39:08.196 11:38:14 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:08.196 11:38:14 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:08.197 11:38:14 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:08.197 11:38:14 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:39:08.197 11:38:14 -- spdk/autotest.sh@398 -- # hostname 00:39:08.197 
11:38:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:08.197 geninfo: WARNING: invalid characters removed from testname! 00:39:34.775 11:38:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:36.161 11:38:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:38.078 11:38:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:39.464 11:38:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:41.376 11:38:47 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:42.758 11:38:48 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:44.670 11:38:50 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:44.670 11:38:50 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:44.670 11:38:50 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:44.670 11:38:50 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:44.670 11:38:50 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:44.670 11:38:50 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:44.670 + [[ -n 3079959 ]] 00:39:44.670 + sudo kill 
3079959 00:39:44.679 [Pipeline] } 00:39:44.692 [Pipeline] // stage 00:39:44.697 [Pipeline] } 00:39:44.711 [Pipeline] // timeout 00:39:44.715 [Pipeline] } 00:39:44.727 [Pipeline] // catchError 00:39:44.731 [Pipeline] } 00:39:44.748 [Pipeline] // wrap 00:39:44.752 [Pipeline] } 00:39:44.760 [Pipeline] // catchError 00:39:44.768 [Pipeline] stage 00:39:44.770 [Pipeline] { (Epilogue) 00:39:44.780 [Pipeline] catchError 00:39:44.781 [Pipeline] { 00:39:44.793 [Pipeline] echo 00:39:44.795 Cleanup processes 00:39:44.800 [Pipeline] sh 00:39:45.089 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:45.089 3796766 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:45.103 [Pipeline] sh 00:39:45.389 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:45.389 ++ grep -v 'sudo pgrep' 00:39:45.389 ++ awk '{print $1}' 00:39:45.389 + sudo kill -9 00:39:45.389 + true 00:39:45.402 [Pipeline] sh 00:39:45.693 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:57.967 [Pipeline] sh 00:39:58.258 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:58.258 Artifacts sizes are good 00:39:58.274 [Pipeline] archiveArtifacts 00:39:58.282 Archiving artifacts 00:39:58.459 [Pipeline] sh 00:39:58.839 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:58.855 [Pipeline] cleanWs 00:39:58.866 [WS-CLEANUP] Deleting project workspace... 00:39:58.866 [WS-CLEANUP] Deferred wipeout is used... 00:39:58.874 [WS-CLEANUP] done 00:39:58.876 [Pipeline] } 00:39:58.894 [Pipeline] // catchError 00:39:58.906 [Pipeline] sh 00:39:59.194 + logger -p user.info -t JENKINS-CI 00:39:59.204 [Pipeline] } 00:39:59.216 [Pipeline] // stage 00:39:59.220 [Pipeline] } 00:39:59.233 [Pipeline] // node 00:39:59.238 [Pipeline] End of Pipeline 00:39:59.264 Finished: SUCCESS